The 3D software pipeline involves several processes, but only rendering can be transferred to a different environment, since it is the one stage that requires no human interaction. It is also the stage that demands the most graphical and computational power. The project is built on a personal computer and then sent to the render farm, in the same way that conventional render farms operate. Here, the file is sent to the cloud, where rendering is performed efficiently and therefore at a lower cost. Once the rendering process completes in the cloud, the rendered output of the project can be retrieved. Deploying the render farm over the cloud has several advantages:
- In cloud computing, the instance type can be changed at any time, so a powerful machine can be used when heavy computation is required and a low-powered one when it is not.
- Rendering is performed using cluster computing, where a set (cluster) of computers divides the main task among themselves and works in parallel to finish the job faster and more efficiently.
The detailed process of rendering is illustrated in Fig. 1. Three kinds of nodes are used in this cluster. The first is the Master node, the only node with which users interact. It takes the project file and the required details (such as the frames to be rendered), analyses and divides the task into subtasks based on the number of worker nodes, and then sends each subtask, along with the project file, to the respective worker node. The worker nodes perform the actual rendering: each receives its task from the master node, renders the assigned frames, and returns the output to the master node. Each worker node also sends its frames to the backup node. The role of the backup node is to maintain a copy of all rendered files sent from the worker nodes to the master node, in case of any data discrepancy in the master or worker nodes after the process completes.
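The master node's division of frames into subtasks can be sketched as below. The function name and the contiguous-chunk strategy are illustrative assumptions, not the paper's exact implementation; any partition that covers every frame exactly once would serve.

```python
def split_frames(start, end, n_workers):
    """Divide the inclusive frame range [start, end] into contiguous
    chunks, one per worker node (illustrative sketch)."""
    total = end - start + 1
    base, extra = divmod(total, n_workers)
    chunks, frame = [], start
    for i in range(n_workers):
        size = base + (1 if i < extra else 0)  # spread any remainder
        chunks.append(list(range(frame, frame + size)))
        frame += size
    return chunks

# e.g. frames 1-10 over 3 workers -> chunk sizes 4, 3, 3
print(split_frames(1, 10, 3))
```

The master would then send chunk `i` to worker node `i`, so no frame is rendered twice and no frame is skipped.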
All the computers in the cluster may or may not belong to a single server, and even when they do, there is no physical connection between the nodes we use. How, then, can this process be achieved? It can be achieved using tools such as Ansible and SCP. Ansible [7] automates remote system management and maintains the desired state of the systems. At its core, an Ansible environment has three key parts. The first is the control node (or master node), on which Ansible is installed and from which all other nodes are controlled. The next is the managed node (or worker node), a remote system that Ansible controls from the control node. Finally, the inventory is a file on the control node that lists all the managed nodes. Ansible operates by establishing connections with nodes on a network and delivering to each node a small program called an Ansible module. Ansible runs these modules over SSH and deletes them once they are done. The managed nodes must allow login access from the Ansible control node for this interaction to work; the most popular method of granting access is through SSH keys, although other types of authentication are also available [8, 9]. Thus, Ansible is used to send the instructions from the master node to the worker nodes. Moreover, Ansible only requires the starting point and the desired end state; the steps needed to get from start to end are taken care of by Ansible itself. The Secure Copy Protocol, or SCP, transfers computer files securely from a local to a remote host and runs on port 22 [10]. SCP moves files from one node to another with authentication and security built in, so the data being transferred remains confidential; hence, SCP can successfully block packet sniffers from extracting valuable information from the data packets.
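A worker node returning a rendered frame over SCP can be sketched as follows. The user name, host address, paths, and key file are hypothetical placeholders, not values from the paper; only the standard `scp` flags (`-i` for the SSH key, `-P` for the port) are assumed.

```python
import subprocess

def build_scp_cmd(local_path, user, host, remote_path, key_file):
    """Build the SCP command used to copy a file to a remote node.
    SCP rides on SSH (port 22); login here uses an SSH private key."""
    return [
        "scp",
        "-i", key_file,   # key-based authentication
        "-P", "22",       # SCP runs over port 22
        local_path,
        f"{user}@{host}:{remote_path}",
    ]

def scp_push(*args):
    """Execute the transfer (raises CalledProcessError on failure)."""
    return subprocess.run(build_scp_cmd(*args), check=True)

# e.g. a worker returning a rendered frame to the master node:
# scp_push("frame_0001.png", "ubuntu", "10.0.1.5", "/renders/", "farm.pem")
```

Because the transfer is encrypted end to end, intercepting the packets yields no usable frame data, which is the packet-sniffing protection described above.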
As we are working on cloud instances, there is a risk of data leakage or insecurity. This can be avoided by using a Virtual Private Cloud (VPC) in AWS, as shown in Fig. 2. As stated earlier, within a VPC the computers have no relation to the other computers on the same server, and a security group attached to the VPC restricts access from certain IP addresses over certain ports. The overall process of the proposed method, depicted in Fig. 3, is as follows:
- Take input from the user: the frames to be rendered, the number of worker nodes created in the cluster through the AWS EC2 management console, and the IP addresses of the nodes.
- Divide the total frames to be rendered into separate arrays according to the number of worker nodes in the cluster.
- Create the three configuration files required for the proper functioning of Ansible from the data received from the user.
- Run a syntax check on the created playbook file to make sure it contains no errors.
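Of the configuration files mentioned above, the inventory can be generated directly from the user-supplied IP addresses. The sketch below is a plausible INI-style layout; the group names, user name, and key file are assumptions, since the paper does not list the exact contents of its three configuration files.

```python
def build_inventory(worker_ips, backup_ip, user="ubuntu", key_file="farm.pem"):
    """Render an INI-style Ansible inventory listing the managed nodes.
    Group names, user, and key file are illustrative assumptions."""
    lines = ["[workers]"]
    lines.extend(worker_ips)           # one managed worker node per line
    lines += [
        "",
        "[backup]",
        backup_ip,
        "",
        "[all:vars]",
        f"ansible_user={user}",
        f"ansible_ssh_private_key_file={key_file}",
    ]
    return "\n".join(lines) + "\n"

# e.g. two worker nodes and one backup node (hypothetical private IPs)
print(build_inventory(["10.0.1.5", "10.0.1.6"], "10.0.1.9"))
```

The resulting text would be written to the inventory file on the control node, after which the playbook's syntax check can be run before any rendering task is dispatched.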
