This past week I’ve been trying to get the perfect photogrammetry model of a head and face. The reason is that I can animate faces more accurately if the mesh is more realistically contoured. This way, once we have captured participants’ full bodies with photogrammetry, we would also capture a closer-up model of each head and stitch the two models together in Maya. This would allow us to create a fully animated person with moving eyes and facial expressions.
So far, the results of my work have been less than ideal. I assumed the process was as simple as the typical take-a-full-rotation-of-an-object method, but it may be more complicated than that.
First I photographed just the front of Dan’s head, and the pictures only composited into a model of half his face:
The contouring, however, is great and looks easy enough to animate with blend shapes. But I want a full model of a head, so I did the entire rotation:
Surprisingly, this model came out even worse than the first. Although it covers more than half of Dan’s face, most of the top of his head is gone and a huge chunk of his brow and forehead is missing. His face is also distorted, which I assumed was a result of his glasses being left on. I did one last model of my roommate’s head by rotating completely around her in even lighting, and the result…
Not so great. Her face is nearly two dimensional and there’s absolutely no roundness to her head.
I’m not quite sure where to go from here. I’ll have to try capturing someone’s head by spiraling from the top of their head to their chin. I know the appeal of photogrammetry is its eerie, subtle distortions, but there must be a way to get a tidier model of a person’s face.
This week it was my job to work with a mesh of Zakon’s head to create facial expressions. Although his face was much better defined than previous attempts at capturing a model of someone’s head, there were still many issues that kept me from creating a clear expression in Maya.
As you can see, the model of Alex’s head is seriously flawed. It’s an improvement on the models I had created, but the shape of his face is poorly defined and the mesh itself needs to be smoother. There are also so many faces that it’s difficult to see the changes I’m making while I work with the mesh.
The best way for me to fix this issue is simply to create the expressions in Mudbox. In Mudbox I can manipulate the face easily without having to see every individual face of the mesh, and then export the object as an OBJ. This lets me create multiple models of Alex’s face with different expressions, which is preferable for animating with blend shapes. Here are some expressions I did this week:
Alex looking angry
Obviously these need some editing, and hopefully I’ll have more meshes to work with over break. But at this point I’ve come up with a good method for facial animation, and editing the faces in Mudbox is quite practical. Over spring break I’ll be working on this, and then I’ll move on to animating the eyes.
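For anyone curious how the blend shape technique works under the hood: each expression target stores an offset from the neutral mesh, and the deformed face is just the base vertices plus a weighted sum of those offsets. This is a minimal plain-Python sketch of the idea (not Maya’s or Mudbox’s actual implementation; the function and data here are made up for illustration):

```python
def blend(base, targets, weights):
    """Blend-shape deformation: base mesh plus weighted target offsets.

    base:    list of (x, y, z) vertex positions for the neutral face
    targets: list of vertex lists, one per expression target (same length as base)
    weights: one weight per target, typically animated between 0 and 1
    """
    result = []
    for i, (bx, by, bz) in enumerate(base):
        x, y, z = bx, by, bz
        for target, w in zip(targets, weights):
            tx, ty, tz = target[i]
            # Each target contributes its offset from the base, scaled by its weight.
            x += w * (tx - bx)
            y += w * (ty - by)
            z += w * (tz - bz)
        result.append((x, y, z))
    return result

# Toy example: one vertex, one "smile" target that raises it 1 unit in y.
base = [(0.0, 0.0, 0.0)]
smile = [(0.0, 1.0, 0.0)]
print(blend(base, [smile], [0.5]))  # → [(0.0, 0.5, 0.0)]
```

Keyframing the weights over time is what produces the animation: a weight sliding from 0 to 1 morphs the neutral face into the sculpted expression.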
Side note: as for my account with DAQRI, they still haven’t approved my application, so I don’t have my Lost Thing ready to present. I emailed them, and hopefully they’ll respond with my account status.
Since my focus in this class is working with blend shapes, my personal project idea is to use a photogrammetry model of a Greco-Roman-style bust and make a very short film of its changing expressions under various kinds of light. Here are a couple of videos that I’ve thought about as inspiration:
I don’t have many more details on the project other than that. I’d rather let the content evolve as I experiment with it in Maya.
As for the Lost Things project, I’m in the process of rigging the faces of models from the Google Drive, and hopefully, after I paint weights, I’ll be able to at least sync their expressions to what each person says in their interview.
I made a photogrammetry model of a small drawing bust I have at home to use for my project, and it turned out perfectly. I then manipulated the vertices to create different expressions and used blend shapes to animate them. This is the result:
After reviewing the blend shape animation I created above, I realize that the changing facial expressions look a little too deliberate because none of the motion is staggered. I’m not sure if there’s a way to create an organized system of blend shapes so that I can change different parts of the face at different times. For example, if I wanted to create a surprised face, I would want to lift the eyebrows before showing any adjustment to the mouth. Considering I would need a different blend shape for each feature change within every expression, this could be a headache. I’m thinking I’ll take a more experimental approach and create some distorted blend shapes to help transition between each expression.
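One way to get that staggered feel without sculpting a separate blend shape per feature is to split the expression into per-feature targets (brows, mouth, etc.) and offset when each weight ramps in. A rough sketch of that timing idea in plain Python (the feature names and schedule are hypothetical, not from my actual scene):

```python
def staggered_weight(t, start, duration):
    """Linear 0-to-1 ramp that begins at `start` and lasts `duration` seconds."""
    if t <= start:
        return 0.0
    if t >= start + duration:
        return 1.0
    return (t - start) / duration

# Hypothetical "surprised" schedule: brows lead, mouth follows half a second later.
schedule = {"brows_up": (0.0, 0.5), "mouth_open": (0.5, 0.5)}

for t in (0.0, 0.25, 0.5, 0.75, 1.0):
    weights = {name: staggered_weight(t, s, d) for name, (s, d) in schedule.items()}
    print(t, weights)
```

At t = 0.25 the brows are halfway up while the mouth hasn’t moved; by t = 1.0 both are fully on. In practice this just corresponds to offsetting the keyframes on each blend shape’s weight channel.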
Although I didn’t make a lot of exciting progress this week, I spent my time creating different blend shapes for the statue and considering the environment. I decided to take an HDRI image and apply it as a texture to a sphere, then adjusted the sphere so the background had a distorted look. This is the image I used:
And this is the result of applying it as a texture to a sphere:
After finally finishing the project, I have a 30-second animated short featuring the statue on a column in the middle of a museum. In the end, I used about 20 different blend shapes to create the effects seen in the short. This is a screenshot I took just before I adjusted the expression blend shapes. As you can see, I also rigged the bust to add a little more flexibility.
This project doesn’t have any sort of deep meaning to it. It’s an exercise with blend shapes, and I thought my drawing bust would be an interesting model to use, so I created an environment around it. I suppose the museum-after-hours concept comes from the childlike idea that objects which resemble living things, like dolls, statues, and stuffed animals, come to life when we aren’t looking.