I’ve been encountering difficulties with the Rift SDK – my project wouldn’t play in the editor and my builds would crash. Since it’s new tech, getting help is a bit of an uphill battle; everyone is still figuring it out.
Out of the blue, Valve announced their own SDK plugin for Unity (with UE4 support coming next week), and I thought I’d give it a shot. Well, it’s a win/lose – I can play in the editor (and in VR!), but my builds only show a black screen (at least they play without crashing!)
Also – I saw on Reddit a generous VO actor offering to record some dialogue for free, and I pitched my idea and she accepted! So I’ll have a few disparaging lines from a snooty secretary type to mess around with. Which means I’ll have to explore the .MHX2 option for MakeHuman export, as the regular .MHX gives me a single jawbone to try to lip sync with…
I finally overcame the problem of animating MakeHuman characters and getting them into Unity. More or less, the problem seemed to resolve itself when I upgraded to Unity 5. And now my token booth clerk walks, talks and cusses – thanks to dialogue I recorded at 6 AM, when my voice isn’t exactly spry.
Only now, I can’t quite get the Rift integration to work – there are a bunch of coding references I don’t get – and one unhelpful person over at /r/oculusdev suggested I ‘learn to code’ to fix my problem. Yeah pal, as soon as I see a programmer start drawing proportioned, balanced and properly shaded figures or demonstrate how they can make texture maps with alpha masks…
Anyway – neat side note: the texture glitched on my guy, and he has a neat cyberpunk-ish feel about him. If I ever do something in a sci-fi setting, I will definitely re-use him.
Modelling out the token booth for my VR experience “BUM.”
While I don’t consider myself that great at modelling, I feel I get the idea across, and I’m learning every time I make something new. Blender has proven to be a bit challenging, since I’m used to modelling with polys and it constantly wants to convert things to n-gons. Still, I can see the merit in loop cuts and the knife tool – it’ll just take a while to learn and use them effectively. Converting my old trueSpace models over to Blender has also been helpful, and it shows me how much my skills have improved.
What I truly enjoy is the texture painting – a long time ago I got a copy of 3D Paint, and while it was very basic in its toolset, it did open my eyes to a new way of texturing 3D objects. I like to get my basic UV layout and color areas established in Blender, then pull my texture maps into Photoshop to work out details, then go back into Blender for shadows, dirt and other bits of wear.
While my overall look is meant to emulate the classic NYC booth, I did need to make adjustments: a seated character wouldn’t be visible to the player (since the player is sitting on the floor of the subway station), and I eliminated some extra details, such as a card swiper, to avoid ‘dating’ my booth.
I was originally inspired to do this by the dad who did VFX shots of his kids – the original was simply lightsabers comped onto the kids, plus a photo of the remote from Ep. 4 moved around in Premiere.
Then I discovered camera motion tracking in Blender.
It was rough, as this shot was very shaky, jerked around a lot and had no markers shared between the beginning and the end. I managed to bust it up into smaller clips and track those – but even then, they were atrocious to work with. I’m particularly happy with the pit and vaporators at the beginning of the 2nd clip, and I like that Blender added motion blur to match the shot – now if I could just figure out how to make the shadows more intense…
Sometimes the best way to continue a project is to scrap it all and start over. I have been working sporadically on an animation set to the tune “UFO’s, Big Rigs, & BBQ” by Mojo Nixon, featuring a redneck truck driver, a space alien and – of course – BBQ.
I was able to get decent results creating a humanoid figure using Sculptris (an amazing free program) and was delighted to import him into Blender, get him rigged and make him move. Problems arose when I discovered my mesh wasn’t optimal for animating, had some proportions too far out of whack, and had limited ranges of motion.
It also didn’t help that my reference material (me, filming myself acting out what I wanted my character to do) was poorly acted, not thought out and too herky-jerky. It’s what I get for guzzling a ton of coffee and filming myself…
Anyway – back to the drawing board. I’m resculpting my character to animate better, and instead of filming my mouth lip-syncing the song and projecting it onto my mesh, I’m adding a real mouth and will use shape keys to lip sync him to the song – more complicated, but closer to what real animators do.
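For anyone curious what driving shape keys from the song might look like, here’s a minimal sketch. Everything in it is made up for illustration – the viseme names, timings and fade length are hypothetical, and in Blender you’d assign the resulting weights to `key_blocks["..."].value` and keyframe them; this version is plain Python so the idea stands on its own.

```python
# Hypothetical lip-sync sketch: turn phoneme/viseme timings into
# per-frame shape key weights. Names and timings below are invented,
# not taken from any real transcript of the song.

FPS = 24

# (viseme name, start time in seconds, end time in seconds)
phonemes = [
    ("AA", 0.0, 0.2),   # open mouth
    ("MM", 0.2, 0.35),  # closed lips
    ("OO", 0.35, 0.6),  # rounded, as in "UFO"
]

def weights_at_frame(frame, timings, fps=FPS, fade=0.05):
    """Return {viseme: weight 0..1} for one frame, with a short linear
    fade-in/out so mouth shapes blend instead of popping on and off."""
    t = frame / fps
    out = {}
    for name, start, end in timings:
        if start - fade <= t <= end + fade:
            if t < start:                      # fading in
                w = (t - (start - fade)) / fade
            elif t > end:                      # fading out
                w = ((end + fade) - t) / fade
            else:                              # fully on
                w = 1.0
            out[name] = max(out.get(name, 0.0), round(w, 3))
    return out

# At frame 3 (t = 0.125 s) only the "AA" shape is active, fully on.
print(weights_at_frame(3, phonemes))  # → {'AA': 1.0}
```

In Blender the loop body would become keyframe inserts on each shape key; the fade is doing by hand what an F-curve’s easing would otherwise do.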
Not certain if this one will pan out, but I recently came across someone looking to commission a portrait. Here is the thumbnail comp – if it’s a go, I need to have a final painting done and delivered by the 25th of January. Whew!
Ever since I learned that our main library branch has a ‘MakerSpace’ – that is: a 3D printer, 3D scanner, laser cutter, green screen and more – I’ve been dying to try it out. Specifically, I’ve been itching to scan a Santa doll that my mum made years ago, rig it in Blender and animate it.
Sadly, the beard and hair didn’t translate too well, and there are a lot of gaping holes. It also didn’t help that the doll is over 24″ tall and the scanning platform was too small to allow the doll to stand. I did get some decent sections of the face and may use those to sculpt out the rest. My other option is to use photogrammetry (taking a ton of photos and having the computer stitch them together into a 3D model).