Tag Archives: photography

LOADR rigging

The computer is working better than ever after getting the full 24 GB of RAM configured, and I've been playing with a little rigging in Maya on the digital version of my photo-scanned stop-motion puppet.
.  .  .

The #LOADR has working joints now, but I'm still working on the skinning to fix a few spots. The full clip is here: https://youtu.be/2El86ZvSds4
.  .  .

Gynoid Robot Doll #5

This week I'm sending off one of my ball-joint dolls, the robot gynoid, the fifth of the series.


It's always sad to see them go, but I'm glad she found a good home, and I will be casting a new round of dolls very soon.


This was available through the Zerofriends store, where more will be posted early next month. See more photos in the full gallery here:

You can see my work-in-progress ball-joint dolls in the Flickr gallery here:

‘Reverse’ IBL

I watched ‘Beyond the Black Rainbow’ last weekend and put together this test video.

This weekend's tests were all about brainstorming ways to improve Cambot and doing a bit of shooting in the studio. The studio time definitely helped me see what is working on Cambot and where it could use some improvements. The goal of my shoot was to experiment with lights.

I've spent some time before working with traditional image-based lighting (IBL), where I basically used a reference image from a location to digitally light a 3D model.


The reference image is typically a panoramic image unwrapped from a photo of a chrome sphere. The chrome sphere reflects nearly the full 360-degree environment around it, and in doing so captures the light that would be cast upon an object at that position. Typically a gray sphere is photographed at the same time for comparison with the in-progress/finished model.
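As a side note, the geometry of why a single chrome-sphere photo sees almost the whole environment can be sketched in a few lines of Python (assuming an orthographic camera looking straight down the Z axis at the ball; the function name is my own):

```python
import math

def mirror_ball_direction(u, v):
    """Map a point (u, v) on the unit disk of a chrome-sphere photo to the
    world-space direction it reflects (orthographic camera looking down -Z)."""
    r2 = u * u + v * v
    if r2 > 1.0:
        raise ValueError("point lies outside the sphere's silhouette")
    nx, ny, nz = u, v, math.sqrt(1.0 - r2)      # surface normal at this pixel
    # Reflect the view ray d = (0, 0, -1) about the normal: r = d - 2(d.n)n
    return (2 * nz * nx, 2 * nz * ny, 2 * nz * nz - 1)
```

The center of the ball reflects straight back at the camera, while pixels near the rim reflect the environment behind the sphere, which is why only the small wedge directly behind the ball is lost.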


I want to begin the same way, with background locations photographed with reference spheres, then photograph practical scale models to place in them. Typically this is achieved by attempting to mimic the lighting conditions, i.e. “the sun was here so we place a key light here, with a fill light on this side and…”. Then through a process of ‘match-lighting’ a cinematographer/director of photography can reproduce the location's lighting.

Why ‘Reverse’?
Now I want to turn this all around. In theory by using these reference images it should be possible to recreate the environment lighting on demand when photographing a practical model or actor. All that would need to happen is for a directional light source to project onto the surface of the model with the same hue and intensity as the reference.

This definitely isn't a new idea; Paul Debevec developed it years ago through ICT with his Light Stage, pictured above, which I've linked here rather than attempting to summarize it further:

My approach is through the use of DMX stage lighting. There are multi-colored lights capable of mixing red, green, and blue in real time. These are also programmable via the DMX512 protocol, so they can be set up to run through pre-set lighting configurations.

My Stage
I've been using these SlimPAR 64 RGB LED lights from Chauvet. I've currently arranged five of them in a half ring around my stage, all pointing inward.

I plan to eventually upgrade this to a more automatic solution. There are a lot of software packages designed for stage techs; in fact many concerts and night clubs use these systems. There are also computer soft/hardware solutions designed more for filmmakers and animators, like this card for Kuper, which fits in with a Kuper motion control system, or the DDMX-S2 from Dragonframe, which allows stop-motion playback control for incremental programs. For now I'm controlling these via the Chauvet Obey-10 mixing board, which lets me set sliders for each of the lights' color channels independently.

What I'd really like is for it to be able to process a reference image or video and reproduce it automatically, or use a video clip and essentially ‘play it back’. It makes sense that through software I could take a reference image, sample each quadrant's hue and value, and route those values into a DMX controller to drive the lights.
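A minimal sketch of that sampling step in Python (pure stdlib, no actual DMX output; the image is just a nested list of RGB tuples, and the function name is my own):

```python
def quadrant_dmx_values(pixels):
    """Average the RGB of each quadrant of an image (a row-major list of
    rows of (r, g, b) tuples, 0-255) and return four (r, g, b) triples,
    ready to patch onto the R/G/B channels of four DMX fixtures.
    A sketch only -- real output would go through a DMX interface."""
    h, w = len(pixels), len(pixels[0])
    halves_y = (range(0, h // 2), range(h // 2, h))
    halves_x = (range(0, w // 2), range(w // 2, w))
    out = []
    for ys in halves_y:              # top half, then bottom half
        for xs in halves_x:          # left half, then right half
            n = len(ys) * len(xs)
            sums = [0, 0, 0]
            for y in ys:
                for x in xs:
                    for c in range(3):
                        sums[c] += pixels[y][x][c]
            out.append(tuple(round(s / n) for s in sums))
    return out
```

Each returned triple maps directly onto one fixture's red/green/blue DMX channels, so five lights in a half ring would just mean sampling five wedges instead of four quadrants.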

The Science
The theory sounds great, but first I have to figure out the physical lighting limitations of this rig and of LEDs in general. I've often been warned about color temperature in photography: the difference between tungsten (3200 K), daylight (5700 K), and fluorescent (4000 K). However, in attempting to get a clear answer on the temperature of LEDs, I went down a rabbit hole. It seems this all goes out the window the moment you start color mixing; it is completely variable, which means it could be anywhere. Added to this, LEDs typically have a more limited spectrum; take a look at these graphs:

LEDs are often assigned a CRI, which, as I understand it, is how well they can reproduce the sun's light, and thus how balanced a color will appear when illuminated by them. The other thing I've discovered about LEDs is the pulse width of the lights themselves. They aren't actually constantly on; the light blinks on and off at a rate so fast we can't detect it. For many lights this is slowed down for the dimming feature/effect. This pulse width modulation, which our naked eye cannot detect even at lower frequencies, will be picked up by the camera when set to a high shutter speed.

I found it really interesting in this test to see the way the lights' pulse rate interacted with the shutter speed of the camera. It seems there are ways to work around it, selecting a lower shutter speed for example, but I haven't quite figured out the science of it. Looking over forums, shooting around PWM seems to be an increasing problem, especially for venue/location photographers:
Shutterspeed and flickering hmis
PWM is not your friend
LED flicker on camera
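Whether flicker shows up seems to come down to how many full PWM cycles the shutter stays open for. A back-of-the-envelope check (the 500 Hz dimmer frequency is just an assumed example, and the function is my own sketch):

```python
def pwm_cycles_in_exposure(pwm_freq_hz, shutter_denom):
    """How many full PWM on/off cycles fit in one exposure of
    1/shutter_denom seconds. When this is small, each frame catches a
    different slice of the on/off cycle and flicker/banding shows up."""
    return pwm_freq_hz / shutter_denom

# An assumed 500 Hz LED dimmer at 1/1000 s catches only half a cycle
# (flicker-prone), while 1/60 s averages over roughly 8 full cycles.
```

This matches the forum advice: drop the shutter speed until the exposure spans many cycles, and the pulsing averages out.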

Advanced Camera; shooting miniatures?

I had the unique opportunity through my work to attend a pair of lectures on using the camera by UCLA cinematography professor Bill McDonald. http://www.tft.ucla.edu/faculty/william-mcdonald/

We went over photographic lenses, lens focal length, camera operating and camera movement, depth of field, and frame composition, citing camera techniques and examples. It was a really great class. In addition to the examples he walked us through, from films ranging from Goodfellas (zolly) to The Graduate (telephoto, wide angle, zoom lens) to Citizen Kane (depth of field), he demonstrated what he was explaining with a video camera plugged into a large monitor, so we were better able to see and visualize concepts like the compression of space when using a telephoto lens and the increased depth of field from a wide-angle lens. He also talked through and demonstrated the narrative impact of camera movement and how it unconsciously affects the experience of the film.

While I could go on about the class, I feel like the most important thing to get written out right now is this question I've got:

How do focal length and depth of field relate to shooting miniatures to match with live-action plates?

Specifically, I want to be able to take plates of my N-scale miniatures, comp them with live-action backgrounds, and add actors. Do I need to convert the lens' focal length, or match the angle of view exactly?
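One way I've been thinking about the angle-of-view half of that question: for a simple thin-lens model focused near infinity, the angle of view is fixed by focal length and sensor width alone, so the same lens on the same sensor frames the miniature and the live plate identically. A quick sketch (the 36 mm full-frame sensor width is an assumption; swap in your camera's actual sensor width):

```python
import math

def angle_of_view_deg(focal_length_mm, sensor_width_mm=36.0):
    """Horizontal angle of view for a thin-lens model focused at
    infinity; 36 mm is the width of a full-frame sensor."""
    return math.degrees(2.0 * math.atan(sensor_width_mm / (2.0 * focal_length_mm)))

# The 100mm macro sees a narrow ~20 degree slice, while a 14mm wide
# takes in over 100 degrees -- two very different perspectives to match.
```

So mixing a 100mm on the miniature with a 14mm on the actor means the two plates have wildly different perspective, before depth of field even enters into it.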

Here is a test I made a year ago, when I started thinking about these problems, to see if the elements could be made to work. The tracking is terrible, the color is wonky, and the extraction isn't very clean. I could tweak this endlessly to make those aspects work better, but ‘you can't gold plate a turd’. I just need to study this and turn out more tests.

The things I'm most focused on are the issues of camera, depth of field, camera angle, and matching lens distortion/focal length:

The problems I've been running into have mostly related to focus: my 100mm macro lens gives me this lovely slice of shallow depth of field, but that doesn't really help me make the miniature look bigger; I just end up with this:
Out of the necessity of my stage, I've been shooting my miniatures with the 100mm macro and my actor with a 14-40mm wide.


Here is what I need to know about DoF;

Shallow depth of field:

-longer lens

-wider aperture

-subject closer to camera

Deep focus:

-wider lens

-subject further from camera

-smaller aperture

(needs more light to compensate for the smaller aperture)
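For reference, the standard hyperfocal-distance formulas make those rules of thumb concrete. A sketch in Python (the 0.03 mm circle of confusion is the usual full-frame assumption; all distances in mm):

```python
def depth_of_field_mm(f, N, s, c=0.03):
    """Near/far limits of acceptable focus, from the standard
    hyperfocal formulas. f = focal length (mm), N = f-number,
    s = focus distance (mm), c = circle of confusion (mm)."""
    H = f * f / (N * c) + f                      # hyperfocal distance
    near = s * (H - f) / (H + s - 2 * f)
    far = s * (H - f) / (H - s) if s < H else float("inf")
    return near, far
```

Running the numbers for my two lenses: a 100mm at f/2.8 focused 1 m away gives only about 15 mm of total depth of field, which is exactly the macro problem above, while a 14mm at f/8 focused at 1 m is already past its hyperfocal distance, so everything out to infinity is acceptably sharp.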

I've been reading Stu Maschwitz's ‘The DV Rebel's Guide’ and recently finished the chapter on effects and matching cameras. There is a great breakdown of a Rolling Stones video shot by David Fincher, where he explains that the camera doesn't see anything differently between a small object and a large one apart from depth of field. Use the same lens (focal length). So the things to record are distance from the model/actor, distance from the ground, and the angle of the camera. Then scale those distances up or down when you shoot the opposing part.
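Taking that advice literally, the bookkeeping is simple: keep the lens and the camera angle, and divide the full-size distances by the miniature's scale factor (N scale is 1:160). A sketch, with my own function name:

```python
def scaled_camera_setup(distance_m, height_m, scale=160):
    """Convert a full-size camera position to the equivalent position
    for a 1:160 (N-scale) miniature: same lens, same angle, distances
    divided by the scale factor. Depth of field will NOT match -- that
    is the one thing the camera sees differently at miniature scale."""
    return distance_m / scale, height_m / scale

# A camera 8 m from the actor at 1.6 m height becomes a camera
# 50 mm from the model at 10 mm height.
```

The comment about depth of field is the catch: at 50 mm away the focus falls off far faster than it would at 8 m, which is why miniature shots often need small apertures and lots of light.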

I suppose this makes sense, but more testing is going to be necessary.