It's been quite a fun week! Since I had done the majority of my testing for Maya rendering on a single computer, I hit a bit of a snag when attempting to render on another device. This problem isn't unique to this situation; I actually ran into it when opening the file for the first time, as it had been years since I'd worked with Maya on an animation project. I had two distinct workflows: you could submit either an Arnold Scene Source export (an ".ass" file), a zip file of multiple ".ass" files, or even a ".mb" file, but they all resulted in a rendered image with no textures.
So a change in strategy was needed, and it wasn't straightforward from my end. I did get this to work, but the process now is to go into Maya, ensure that all textures and references in the Outliner are imported, then go to File > Archive Scene. This creates a zip file with all required textures in their respective directories. Another issue is that after importing the archive on another computer, the textures can still come through unmapped, because the scene may reference absolute paths rather than relative ones (think C:\maya\textures vs. \sourceimages).
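As an illustration, here is a minimal sketch of the kind of path remapping our script performs; the function name `remap_texture_path` and the `sourceimages` default are my own naming for this example, not part of Maya. Inside Maya you would apply the result to each file node's `fileTextureName` attribute (for example via `maya.cmds.setAttr`), but the path logic itself is plain Python:

```python
import ntpath
import posixpath

def remap_texture_path(abs_path, project_dir="sourceimages"):
    """Map an absolute texture path to a relative path under the
    project's sourceimages directory, keeping only the file name.

    ntpath handles Windows-style paths (C:\\maya\\textures\\...) even
    when this runs on a non-Windows render node.
    """
    file_name = ntpath.basename(abs_path)
    return posixpath.join(project_dir, file_name)
```

This only works because Archive Scene guarantees every texture is physically present in the archive, so dropping the original directory and pointing at sourceimages is safe.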
All in all, the process is functional now in its beta phase. There is no charge for the service as of yet, so you're welcome to try it out. Each node within the Hypernet that is capable of processing Maya (meaning it has the software installed and licensed, and will not leave a watermark) will be able to take on rendering one or more frames of the animation. Each client effectively gets the entire project zip file, extracts it, remaps textures via a script, and renders the frames it was assigned.

Although the current setup is temporary, we will look toward isolating the rendering step with containerization, so that processing nodes cannot view the data being processed. Additionally, there will be a private node pool option: if privacy is a concern, the selected nodes can be restricted to your own. To tease one more idea, a cloud computation model is being evaluated so that, for a fee, temporary nodes can be created and then destroyed once the entire rendering job is complete. In that case, the service cost would be driven directly by Azure pricing, although it can be significantly lower than what someone would pay to create a cloud VM, set up rendering software, and cover the ongoing usage cost.

These different computation methodologies will roll out over time with hyperwave, but the main takeaway from this article is that rendering has a single path available right now: archive the scene and get back an export of frames in a zip file. Do you have any other ideas that would make this solution better for your use case? If so, send us a mail and we'd love to connect!
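For the curious, the per-node fan-out described above can be sketched in a few lines. The function name `assign_frames` and the round-robin policy are illustrative assumptions for this post, not the actual Hypernet scheduler:

```python
def assign_frames(frames, nodes):
    """Distribute animation frames across render nodes round-robin.

    Returns a dict mapping each node to the list of frames it should
    render. Each node then extracts the project zip, remaps texture
    paths, and renders only its assigned frames.
    """
    assignments = {node: [] for node in nodes}
    for i, frame in enumerate(frames):
        assignments[nodes[i % len(nodes)]].append(frame)
    return assignments
```

For example, a six-frame job split across two nodes gives each node three frames, which is why adding capable nodes to the pool shortens wall-clock render time roughly linearly.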