I've been using Frischluft's OFX Lenscare plugins within Fusion, and hit some issues. Frischluft's developer has been very helpful, so I wanted to put some notes here in case they help anyone else. I'm not connected with FL in any way; I'm just in love with being able to use custom iris/bokeh images.

1. FL Depth of Field crashing / seg faults / SIGSEGV / crash to desktop

In some situations, adjusting FL Depth of Field parameters can cause immediate seg faults, making Fusion terminate. Frischluft is looking into it, but it seems to be a call to a Fusion function that fails, making it hard to debug. It seems to happen more with large frame sizes, and it seems to only happen if you have a custom iris image connected. Workaround: disconnect the custom iris image while adjusting parameters. You can judge and adjust the blur pretty well with the standard circular iris, then reconnect the custom one to preview, or before rendering.

2. Using the focal distance picker is very crashy

This seems to be a bug on the Fusion side of things. Avoid using it if it causes problems on your scene/system.

3. Focal distance slider values are a little unclear

The Focal distance slider goes from -100 to +355, and it's not immediately clear how this relates to the near and far points in your scene. The clue is in the docs: the FL DoF plugin uses 8 bits for depth (regardless of the bit depth of the z-buffer image connected). That means the depth buffer is mapped to 0-255 internally: 0 = far point, 255 = nearest point. You can, of course, have the focal point of the camera nearer or further than the z-buffer covers, which is why the Focal distance slider allows values above 255 and below 0. If you need to read the depth of a particular thing in your image, you can always hover over the z-buffer image and read the value directly from Fusion's status bar.

I've found the best settings to use when outputting a Z depth AOV from Redshift are: Filter - Min Depth, Depth Mode - Z Normalized Inverted. This gives you Z buffer values from 0 to 1, far to near, meaning the default depth buffer ranges in FL Depth of Field are already correct.

When I read the first post on this thread, I actually laughed. I too am a 3D artist, and I have been working in games and VFX for the last 15 years. I have spent a great deal of time trying to perfect the process of adding DOF to a single beauty pass render in conjunction with a z-depth pass. Even in CGI, this is not trivial to do well.

Now let's say that, even by some voodoo magic, you are able to obtain a z-depth image for each frame of video you shoot on your humble DSLR. There are many caveats to doing DOF as a post process using plugins like Frischluft LensCare. The first major issue is that you cannot achieve an accurately natural background blur, as your image will not have the information behind the foreground elements which occlude it. The software has to 'make up' what it thinks is behind the subject. This creates an unnatural halo effect around objects where the information has to be interpolated by the blurring software. This is why most 3D rendering is done in separate passes, with the DOF applied to each layer separately. This is how professional compositors do it. There are many other things to take into account to avoid ugly edge artefacting and create a natural-looking image.

One of the key things you have overlooked is motion blur. If you were to have any amount of motion blur in your "voodoo" z pass, it would no longer serve as an accurate grey-scale depth map. Yes, you could shoot your footage at a very high shutter speed and eliminate the motion blur in camera.
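To make the focal slider arithmetic concrete, here is a minimal sketch of the 0-255 mapping. I'm assuming the mapping from a normalized z-buffer sample to the slider is linear (the 8-bit note in the docs implies it, but I haven't verified it against the plugin internals), and `focal_slider_value` is my own name, not anything from the LensCare API:

```python
def focal_slider_value(z_sample):
    """Map a normalized z-buffer sample (0.0 = far point, 1.0 = nearest
    point, e.g. read off Fusion's status bar) onto FL Depth of Field's
    internal 8-bit focal range of 0-255. The slider itself runs -100 to
    +355 so you can place focus nearer or further than the z-buffer covers.
    Hypothetical helper for illustration only."""
    return z_sample * 255.0
```

So a point sampled at 0.5 in a normalized z-buffer would sit around slider value 127.5, and anything outside 0-255 means you are focusing beyond the range the z-buffer actually encodes.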
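If your renderer doesn't offer a "Z Normalized Inverted" mode like Redshift does, the same 0-to-1, far-to-near convention can be produced from a raw depth pass. This is a sketch under my own naming (`normalize_invert` is not a Redshift or LensCare function; `near`/`far` are the scene distances you normalize against):

```python
import numpy as np

def normalize_invert(z, near, far):
    """Remap raw scene-unit depth to the 0-1, far-to-near convention
    that FL Depth of Field's default depth buffer ranges expect."""
    z = np.clip(z, near, far)          # clamp outliers to the chosen range
    t = (z - near) / (far - near)      # 0.0 at the near plane, 1.0 at far
    return 1.0 - t                     # invert: 0.0 = far, 1.0 = nearest

depth = np.array([1.0, 50.5, 100.0])   # raw depth in scene units
result = normalize_invert(depth, near=1.0, far=100.0)
# result holds 1.0, 0.5, 0.0 -- nearest to farthest
```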
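The occlusion problem described in the reply can be seen in the arithmetic of any depth-driven blur. A toy sketch (all names are mine, nothing from LensCare): each pixel's blur radius grows with its distance from the focal plane, so a defocused background pixel needs a wide kernel, and at a foreground silhouette that kernel reaches into pixels the subject occludes — information a single beauty pass simply does not contain, which is what produces the halo.

```python
def coc_radius(depth, focus, strength, max_radius=16.0):
    """Blur (circle-of-confusion) radius for one pixel in a toy
    depth-driven post blur: zero at the focal plane, growing with
    distance from it, clamped to a maximum. Illustration only."""
    return min(max_radius, strength * abs(depth - focus))

# With focus on a subject at depth 0.9, a background pixel at depth 0.1
# gets the maximum radius; blurring it near the subject's edge requires
# samples the subject has hidden, so the plugin must invent them.
```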