Guest Lecture by Neil Sadwelkar

Written by DA students Satyajit Hajarnis, Dipankar Modak, Deep Basu and Nabamita Lahiri

Neil Sadwelkar is one of those personalities in contemporary Bollywood who plans the post-production of your AV project and oversees its implementation. He is a post-production consultant, editor, ad filmmaker and documentary director rolled into one. At one time or another, he has headed Pixion, and then Prime Focus. Currently, he is more into technical consultancy in today's ultra-high-definition digital filmmaking scenario. With a Master's degree in Physics, and years of experience in technical maintenance at the Nehru Planetarium, and later in the mainstream industry, he knows the technical side of any level of AV production. He backs that up with an aesthetic understanding of, and practice in, filmmaking, doing many things at a time, unlike the specialists in Bollywood.

Neil Sadwelkar came to Digital Academy on 2nd May, 2012, to take students on a three-hour journey into the land of digital cinema.

This fantastic journey started with a listing of the digital cameras in the contemporary market. Modern digital camcorders came to the market in the late '80s, but they became truly popular only from the mid '90s. The Indian market swung to digital in the new millennium, and within five years it was literally flooded with cameras from different companies, for different purposes. To make matters more complex, more than seventy-five different recording formats started co-existing. Patent laws and proprietary formats made one machine's media stream or file unreadable by another. That gave birth to many different workflows for the same goal.

Sony marketed the first prosumer digital video camera, the DCR-VX1000, in the mid '90s. It was the first video camera to stream data through the IEEE 1394 interface, commonly known as FireWire. The compression applied to the stream was the standardized DV codec, and the popular storage medium was the ubiquitous mini-DV tape, ¼″ wide.

Sony DCR-VX1000

Very soon, updated models with wider feature sets came up. Canon produced the XL-1, Sony marketed the DSR-PD150, Panasonic followed with the DVCPRO25, and so on.

After George Lucas, in collaboration with Sony and Panavision, developed the CineAlta F900, the first HD camera in the world that could record 24 progressive frames per second, the prosumer and TV markets expected an improvement in their image acquisition too. JVC, Sony and Panasonic responded with the GR-HD1, HVR-Z1 and AG-DVX100 cameras respectively.

Neil was at the forefront of this digital revolution, personally using all these models, and handling or designing the project workflow for each.

He talked about those years, and how he learnt to manage workflows for models as diverse as the later Sony AVCHD camcorders, the Sony NEX series and television broadcast cameras, as well as for the growing use of multipurpose DSLRs such as the Canon 5D Mark III.

Neil listed a dozen such cameras he worked with, through the new millennium years. He also talked about the new generation editing suites that came along, such as AVID Media Composer, and Apple FCP.

When a student asked him which camera he prefers, his point was simple. He prefers none. Each has its own use, as per the requirement of the story and the clientele. A seamless, noiseless, very film-like image sits in the spectator's mind for a Karan Johar romance; the Red Epic, with its own pristine workflow, would be perfect for that job. But a quasi-docufiction like Stanley Ka Dabba may be perfect with a Canon 7D, with its realistic, handheld motion images.

It may in fact look fake if a news documentary is shot with an Arri Alexa, even in ProRes 4:2:2. Neil, who has edited more than 300 TV commercials, does not judge an image by its gloss. He said an image serves its purpose best when it fits the existing mindset of the spectator, or supersedes it, but does not attack it.

In the second phase of his lecture, Neil Sadwelkar took specific examples of very high frame rate cameras such as the Phantom or the Weisscam, recording from 650 to 4000 frames per second for super slow motion. Such cameras are useful not only for commercials or action sequences, but also in sports. Action replay in slow motion, or judging whether it was an LBW, is perfectly possible now thanks to these cameras.

Extremely tiny cameras like the GoPro, Sony POV or Contour are in the market today for their extreme manoeuvrability and invisibility. Such lightweight, heavy-duty cameras can easily be used under water (in a simple water housing), in a balloon overhead, mounted on a chopper, or on a diver's helmet if necessary.

Footage from such diverse sources was never possible before the digital revolution. These days, truly, imagination (or the lack of it) is the only fence that limits an artist's creativity. Implementation is just a matter of planned execution.

With this, Neil Sadwelkar arrived at the most important part of his talk: how to plan a shoot, and how the image is really acquired inside a digital camera.

Unlike a traditional film camera, a digital camera captures images with a sensor. The sensor converts the incoming array of brightness variations into variations in electric voltage. Through electronic switching in the ICs, an electronic map of the same image is created. This image can then be processed inside the camera in various ways.
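The voltage-to-number step can be pictured with a toy quantizer. This is purely an illustrative sketch; the bit depth and voltage range are assumptions, not any specific camera's values:

```python
# Toy sketch (not any real camera's pipeline): a photosite's analog
# voltage is quantized by an ADC into an integer code value, building
# the "electronic map" of the image one sample at a time.
def quantize(voltage, full_well_volts=1.0, bit_depth=12):
    """Map an analog voltage in [0, full_well_volts] to an integer code."""
    levels = 2 ** bit_depth                    # 4096 levels at 12 bits
    code = int(voltage / full_well_volts * (levels - 1))
    return max(0, min(levels - 1, code))       # clamp to the valid range

mid_grey = quantize(0.5)   # half the full-well voltage -> code 2047
```

The full sensor readout is just this mapping applied to every photosite, which is why the result can then be processed freely in the digital domain.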


Noise reduction, contrast enhancement and assigning the output to a particular colour space may be done in the camera. A Look Up Table (LUT) can be saved from such settings, and further applied to future images, or image streams.
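The idea behind a LUT can be sketched in a few lines. This is a minimal, hypothetical 8-bit gamma LUT, not any camera's actual format: every possible input code is pre-computed once, so applying the "look" is just a table lookup per pixel:

```python
# Minimal 1D LUT sketch: pre-compute an output value for every 8-bit
# input code, then apply the look by indexing, not by re-computing.
def build_gamma_lut(gamma=2.2, size=256):
    """Pre-compute a gamma-curve output for every input code 0..size-1."""
    return [round(((i / (size - 1)) ** (1 / gamma)) * (size - 1))
            for i in range(size)]

lut = build_gamma_lut()
pixels = [0, 64, 128, 255]          # a few 8-bit pixel values
graded = [lut[p] for p in pixels]   # applying the LUT is a table lookup
```

Because the mapping is fixed in advance, a camera can apply it to a whole frame in real time; the cost, as the next paragraph notes, is that the look is baked in.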

However, that would give a permanent, or baked, look to the moving image. If the DP, or Director, later wants to change certain properties of the image, s/he would not be able to do so without losing visual information.

It was precisely for this reason that the Raw image output made possible by the Red One camera became so popular. The Red One, and its successors up to the contemporary Red Epic powered by the Dragon sensor, offer the filmmaker the choice of outputting the raw electronic map of the original image. With maximum visual information in hand, the filmmaker can decide how to optimize the image for different viewing platforms: cinema halls, Blu-ray discs, or satellite TV.


[Images: Raw footage compressed at flat gamma (left); the same image baked with a LUT, after color correction (right)]

In reply to a student’s question, Neil clarified, at this point, that the Raw data captured in the Arri Alexa or Red Epic camera is never output as Raw. Raw, being just an array of voltage fluctuations, is unreadable by the human brain. Hence, to show up as an image, Raw always has to undergo some compression.

Compressions are of two types: lossy and lossless. Some compressions, such as 3:1 or 5:1, retain so much visual information that for practical purposes they can be treated as Raw.

While high-budget Hollywood movies are shot at 5:1 or 6:1 compression ratios, Indian blockbusters such as Bhaag Milkha Bhaag, shot with an array of Epics, used mostly an 8:1 compression ratio. TV shows use compression ratios of around 12:1.

From here, Neil Sadwelkar traced the journey of the captured image to the end product. He showed how compression is necessary for another reason: uncompressed streams are too big to be recorded to the memory card in real time. This pushed the industry to invent external stream recorders, such as the AJA Ki Pro or the Sony AXS-R5.


In the modern file based, tapeless systems, movie files are ultimately recorded in some specified formats.

While formats like R3D RedRaw are machine-specific, similar to a computer's machine language (or, at best, assembly language), those with compressions like Apple ProRes 4:2:2, wrapped in a .mov container, are much more portable, just like a compiled program.

And just like a compiled program, they are less efficient too.

However, efficiency, which in the filmmaker's eyes translates directly to image quality, matters less for TV. While most current Indian TV shows have a bandwidth of around 20 Mbps, a theatrical projection must be acquired at nearly 400 Mbps, or more. That is precisely the bandwidth of the Red Epic at an 8:1 compression ratio.
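Those bitrates translate into storage requirements with simple arithmetic (the figures are the lecture's round numbers, not exact specifications):

```python
# Back-of-envelope data-rate arithmetic using the bitrates quoted above.
def gigabytes_per_hour(mbps):
    """Convert a bitrate in megabits per second to gigabytes per hour."""
    return mbps / 8 * 3600 / 1000   # Mb/s -> MB/s -> MB/hour -> GB/hour

tv_show = gigabytes_per_hour(20)    # ~9 GB for an hour of TV material
feature = gigabytes_per_hour(400)   # ~180 GB per hour at theatrical quality
```

A twenty-fold difference in bitrate is also a twenty-fold difference in storage, which is why the recording hardware described next had to evolve alongside the cameras.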

This led to dedicated in-camera memory recorders and hard disks, like the RedMag or the Aluratek hard disk controller. Also, new interfaces like Thunderbolt have come into existence, finally replacing the age-old FireWire technology.

Many cameras also provide a cheaper HDMI or HD-SDI interface for relatively uncompressed HD output.

In most of these cameras, sound can be recorded as relatively uncompressed PCM at 48 kHz, with 24-bit samples.
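The audio figure is easy to verify: an uncompressed PCM bitrate is just sample rate times bit depth times channel count (stereo is assumed here for illustration):

```python
# Uncompressed PCM bitrate: sample_rate x bit_depth x channels.
def pcm_kbps(sample_rate_hz=48000, bit_depth=24, channels=2):
    """Bitrate of uncompressed PCM audio, in kilobits per second."""
    return sample_rate_hz * bit_depth * channels / 1000

rate = pcm_kbps()   # 48 kHz / 24-bit stereo -> 2304.0 kbps
```

At roughly 2.3 Mbps for stereo, sound is a rounding error next to the picture's hundreds of megabits per second, which is why cameras can afford to leave it uncompressed.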

At this point, Neil Sadwelkar opened up the issue of D-Cinema and E-Cinema, which he had touched on before. D-Cinema is the universally accepted standard for professional theatrical projection, while E-Cinema is the HDTV standard. While TV revolves around HD, 1920 x 1080 resolution, and the ProRes 4:2:2 compression method, motion pictures are set at a higher standard which starts from 2K, 2048 horizontal pixels, and can go up to 5K for all practical purposes.

However, what Neil did not mention here was that the human eye is perhaps not made to resolve more than 2K of projection resolution, on average. A debate on this, raised by Paul Wheeler in the first years of the new millennium, is quite well known.

The last phase of Neil's lecture concerned the quality of the acquired footage and the Digital Intermediate workflow that handles it. Neil said that while professional digital film cameras such as the Sony SRW-9000 or the CineAlta F35 record on HDCAM SR tape for the best output, the Red One or Canon C300 records on a CF card. Along with the Raw, proxy files at different compression ratios, in ProRes 4:2:2, are generated automatically in many of these cameras.

However, the workflow remains very similar and commonsensical, whatever the capture method or compression is.

If the moving image is captured on tape, in HDV or Varicam format, it has to be streamed to the editing machine through a FireWire or Thunderbolt interface. Normally, the footage undergoes a generation loss as it gets dumped and compressed into a machine-readable format.

There are many different formats for different machines, or editing programs, such as Avid’s OMF or MXF.

For file based image acquisition, sometimes the footage has to be compressed so that the machine can handle it. Such compressions are very similar to proxies generated in some of the digital film cameras.

After editing, an EDL or XML (for FCP X) is generated to open the project in a color correction suite, after conforming the XML data with the original-quality footage.
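What the conform relies on can be sketched with a single, simplified CMX-style EDL event: source and record timecodes are what let the suite relink proxy edits to the original-quality footage. The event line and helper below are illustrative, not a full EDL parser:

```python
# Sketch of conforming: an EDL event carries source-in/out and
# record-in/out timecodes; converting them to frame counts lets a
# conform tool find the matching span in the original footage.
def tc_to_frames(tc, fps=24):
    """Convert an HH:MM:SS:FF timecode string to a frame count."""
    h, m, s, f = (int(x) for x in tc.split(":"))
    return ((h * 60 + m) * 60 + s) * fps + f

event = "001  TAPE01  V  C  00:01:00:00 00:01:05:00 01:00:00:00 01:00:05:00"
fields = event.split()
src_in, src_out = fields[4], fields[5]
duration = tc_to_frames(src_out) - tc_to_frames(src_in)  # 5 s = 120 frames
```

Because the math is exact, the conformed timeline cuts in precisely the same places as the offline edit, only with full-quality media.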

At this stage, CGI works are composited on live motion plates too, before the final Color Correction.

After the final CC, sound and music are added. Only at this stage is the project ready for an initial approval.

These days, almost all projects go for a final encryption at the hands of companies like Scrabble and Qube, to be packaged as projection-ready Digital Cinema Packages.

Neil Sadwelkar answered a few final questions related to Cinema projection, and how projectors handle the encrypted DCP, with a unique Keycode.

It was a marathon session, covering almost a biography of Digital Cinema. However, there was little time at the end for a detailed discussion on the Digital Intermediate. Many students wanted to hear more about that highly glamorized workflow, on which Neil is an expert. However, Neil satisfied their curiosity by showing that it is not possible to talk about all workflows, as they keep changing with the nature of the project, and finally all boil down to common sense.
