Guest Lecture by Neil Sadwelkar

Written by DA students Satyajit Hajarnis, Dipankar Modak, Deep Basu and Nabamita Lahiri

Neil Sadwelkar is one of those personalities in contemporary Bollywood who plans the post-production of your AV project and oversees its implementation. He is a post-production consultant, editor, ad filmmaker and documentary director rolled into one. At one time or another, he headed Pixion, and then Prime Focus. Currently, he is more into technical consultancy in today’s ultra-high-definition digital filmmaking scenario. With a Master’s degree in Physics, and years of experience in technical maintenance at the Nehru Planetarium and later in the mainstream industry, he knows the technical side of every level of AV production. He backs that up with an aesthetic understanding and practice of filmmaking, doing many things at a time, unlike the specialists of Bollywood.

Neil Sadwelkar came to Digital Academy on 2nd May, 2012, to take students on a three-hour journey into the land of digital cinema.

This fantastic journey started with a listing of the digital cameras in the contemporary market. Modern digital camcorders came to the market in the late ’80s, but they became truly popular only from the mid ’90s. The Indian market swung to digital in the new millennium, and within five years the market was flooded with cameras from different companies, for different purposes. To make matters more complex, more than seventy-five different recording formats started co-existing. Patent laws and proprietary formats made one machine’s media stream or file unreadable by another. That gave birth to many different workflows for the same goal.

Sony marketed the first prosumer digital video camera, the DCR-VX1000, in the mid ’90s. It was the first video camera to stream data through the IEEE 1394 interface, commonly known as FireWire. The stream used the standardized DV compression, and the popular storage medium was the ubiquitous mini-DV tape, ¼ inch wide.

 Sony DCR-VX1000


Very soon, updated models with wider capabilities came up: Canon produced the XL-1, Sony marketed the DSR-PD150, Panasonic the DVCPRO25, and so on.

After George Lucas, in collaboration with Sony and Panavision, developed the CineAlta F900, the first HD camera in the world that could record 24 progressive frames per second, the prosumer and TV markets expected an improvement in their image acquisition too. JVC, Sony and Panasonic responded with the GR-HD1, the HVR-Z1 and the AG-DVX100 respectively.

Neil was at the forefront of this digital revolution, personally using all these models, and handling or designing the project workflow for each.

He talked about those years, and how he learnt to manage workflows for models as diverse as the later Sony AVCHD camcorders, the Sony NEX series and television broadcast cameras, along with the growing need to use multipurpose DSLRs such as the Canon 5D Mark III.

Neil listed a dozen such cameras he has worked with through the years of the new millennium. He also talked about the new generation of editing suites that came along, such as Avid Media Composer and Apple FCP.

When a student asked him which camera he prefers, his point was simple: he prefers none. Each has its own use, as per the requirements of the story and the clientele. For a Karan Johar romance, a seamless, noiseless, very film-like image sits well in the spectator’s mind; the Red Epic, with its own pristine workflow, would be perfect for that job. But a quasi-docufiction like Stanley Ka Dabba may be perfect with a Canon 7D, with its realistic, handheld motion images.

It may in fact look fake if a news documentary is shot with an Arri Alexa, even in ProRes 4:2:2. Neil, who has edited more than 300 TV commercials, does not judge an image by its gloss. He said an image serves its purpose best when it fits the existing mindset of the spectator, or supersedes it, but does not attack it.

In the second phase of his lecture, Neil Sadwelkar took up specific examples of very high frame-rate cameras such as the Phantom or the Weisscam, recording from 650 to 4,000 fps for super slow motion. Such cameras are useful not only for commercials or action sequences, but also in sports. Action replays in slow motion, or judging whether it was an LBW, are perfectly possible now thanks to these cameras.

Extremely tiny cameras like the GoPro, Sony POV or Contour are in the market today for their extreme maneuverability and invisibility. Such lightweight, heavy-duty cameras can easily be used under water (in a simple water housing), on a balloon above, mounted on a chopper head, or on the diver’s helmet if necessary.

Footage from such diverse sources was never possible before the digital revolution. These days, truly, imagination (or the lack of it) is the only fence that limits an artist’s creativity. Implementation is just a matter of planned execution.

With this, Neil Sadwelkar arrived at the most important part of his talk – how to plan a shoot, and how the image is really acquired inside a digital camera.

Unlike a traditional film camera, a digital camera captures images with a sensor. The sensor converts the incoming array of brightness variations into variations in electric voltage. Through electronic switching in the ICs, an electronic map of the same image is created. This image can then be processed inside the camera in various ways.


Noise reduction, contrast enhancement and assigning the output to a particular colour space may be done in the camera. Particular Look-Up Tables (LUTs) can be saved from such settings, and they can be further applied to future images, or image streams.

However, that would give a permanent, or baked, look to the moving image. If the DP, or Director, later wants to change certain properties of the image, s/he would not be able to do so without losing visual information.
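The mechanics of a LUT, and why a baked look is destructive, can be sketched in a few lines of Python. This is a toy 1-D LUT on 8-bit values with a hypothetical brightness-lifting curve, not any real camera’s LUT format:

```python
def apply_lut_1d(pixels, lut):
    """Apply a 1-D look-up table to 8-bit pixel values.

    lut holds 256 output values, one per input code value. Once the
    result is saved over the original, the mapping cannot be undone
    exactly: the look is "baked in".
    """
    return [lut[p] for p in pixels]

# A hypothetical brightness-lifting curve (gamma 0.8), for illustration only.
lut = [min(255, round((v / 255) ** 0.8 * 255)) for v in range(256)]

graded = apply_lut_1d([0, 64, 128, 255], lut)
```

Because several input code values can land on the same output value, grading a baked image later means working with less information than the sensor originally delivered.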

It was precisely for this reason that the Raw image output made possible by the Red One camera became so popular. The Red One, and its upgrades up to the contemporary Red Epic powered by the Dragon sensor, offer the filmmaker the choice of outputting the raw electronic map of the original image. With maximum visual information in hand, the filmmaker can decide how to optimize the image for different viewing platforms – cinema halls, Blu-ray discs, or satellite TV.


Raw compressed with flat gamma (left); baked with a LUT / after color correction (right)

In reply to a student’s question, Neil clarified at this point that the Raw data captured in the Arri Alexa or Red Epic camera is never viewed as Raw. Raw, being just an array of voltage fluctuations, is unreadable to the human eye. Hence, to show up as an image, Raw always has to undergo some processing and compression.

Compressions are of two types – lossy and lossless. Some compressions, such as 3:1 or 5:1, retain so much visual information that they can practically be treated as Raw.

While high-budget Hollywood movies are shot at 5:1 or 6:1 compression ratios, Indian blockbusters such as Bhaag Milkha Bhaag, shot with an array of Epics, used mostly an 8:1 compression ratio. TV shows use compression ratios of around 12:1.

From here, Neil Sadwelkar traced the journey of the captured image to the end product. He showed how compression is necessary for another reason too: uncompressed streams are too big to be recorded to the memory card in real time. This pushed the industry to invent external stream recorders, such as the AJA Ki Pro or the Sony AXS-R5.


In the modern file based, tapeless systems, movie files are ultimately recorded in some specified formats.

While formats like R3D RedRaw are machine-specific, similar to a computer’s machine language (or, at best, assembly language), those with compressions like Apple ProRes 4:2:2, wrapped in a .mov extension, are much more portable, just like a compiled program.

And just like a compiled program, they are less efficient too.

However, efficiency, which translates directly to image quality in the filmmaker’s terms, matters less for TV. While most current Indian TV shows have a bandwidth of around 20 Mbps, a theatrical projection must be acquired at near 400 Mbps, or more. That is precisely the bandwidth of the Red Epic at an 8:1 compression ratio.
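The link between compression ratio and bandwidth is simple arithmetic. The sketch below uses illustrative figures (a 5120 x 2700 photosite grid at 16 bits per photosite, 24 fps), not Red’s published specifications, but it lands in the same few-hundred-Mbps range quoted above:

```python
def data_rate_mbps(width, height, bits_per_photosite, fps, ratio):
    """Approximate recorded data rate, in megabits per second,
    for raw sensor data compressed at the given ratio."""
    raw_bits_per_second = width * height * bits_per_photosite * fps
    return raw_bits_per_second / ratio / 1e6

# Illustrative 5K-class frame at 24 fps with 8:1 compression.
epic_like = data_rate_mbps(5120, 2700, 16, 24, 8)  # a few hundred Mbps
```

Doubling the compression ratio halves the data rate, which is why TV production can live with 12:1 while theatrical acquisition stays near 8:1 or lower.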

This has led to dedicated in-camera memory recorders and hard disks, like the RedMag or the Aluratek hard disk controller. New interfaces like Thunderbolt have also come to exist, finally replacing the age-old FireWire technology.

Many cameras also provide comparatively cheaper HDMI or HD-SDI interfaces for comparatively uncompressed HD output.

In most of these cameras, sound can be recorded as comparatively uncompressed PCM at 48 kHz, with 24-bit samples.
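The data rate of uncompressed PCM follows directly from the sampling parameters, which is why cameras can afford to leave sound uncompressed; stereo 48 kHz / 24-bit audio is tiny next to the video stream:

```python
def pcm_bitrate_kbps(sample_rate_hz, bit_depth, channels):
    """Data rate of uncompressed PCM audio, in kilobits per second."""
    return sample_rate_hz * bit_depth * channels / 1000

# Stereo 48 kHz, 24-bit: about 2.3 Mbps, negligible beside a 400 Mbps picture.
stereo_rate = pcm_bitrate_kbps(48_000, 24, 2)  # 2304.0 kbps
```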

At this point, Neil Sadwelkar opened up the issue of D-Cinema and E-Cinema, which he had touched upon before. D-Cinema is the universally accepted standard for professional theatrical projection, while E-Cinema is the HDTV standard. While TV revolves around HD – 1920 x 1080 resolution and the ProRes 4:2:2 compression method – motion pictures are set at a higher standard, which starts from 2K (2048 horizontal pixels) and can go up to 5K for all practical purposes.

However, what Neil did not mention here was that the human eye is perhaps not made to resolve more than 2K of projection resolution, on average. The debate on this, raised by Paul Wheeler in the first years of the new millennium, is quite well known.

The last phase of Neil’s lecture concerned the quality of the acquired footage and the Digital Intermediate workflow that handles it. Neil said that while footage from professional digital film cameras such as the Sony SRW-900 or CineAlta F35 is recorded on HDCAM SR tape for the best output, the Red One or Canon C300 records on CF cards. Along with the Raw, proxy files of different compression ratios, in the ProRes 4:2:2 method, are generated automatically in many of these cameras.

However, the workflow remains very similar and commonsensical, whatever the capture method or compression is.

If the moving image is captured on tape, in HDV or Varicam format, it has to be streamed to the editing machine through a FireWire or Thunderbolt interface. Normally, the footage undergoes a generation loss as it gets dumped and compressed into a machine-readable format.

There are many different formats for different machines, or editing programs, such as Avid’s OMF or MXF.

For file based image acquisition, sometimes the footage has to be compressed so that the machine can handle it. Such compressions are very similar to proxies generated in some of the digital film cameras.

After editing, an EDL, or an XML (for FCP X), is generated to open the project in a color correction suite, after conforming the EDL/XML data with the original-quality footage.

At this stage, CGI work is also composited onto live-action plates, before the final color correction.

After the final CC, sound and music are added. Only at this stage is the project ready for initial approval.

These days, almost all projects go for a final encryption at the hands of companies like Scrabble and Qube, to be packaged as projection-ready Digital Cinema Packages.

Neil Sadwelkar answered a few final questions related to Cinema projection, and how projectors handle the encrypted DCP, with a unique Keycode.

It was a marathon session, covering almost a biography of Digital Cinema. However, there was little time at the end for a detailed discussion of the Digital Intermediate. Many students wanted to hear more about that highly glamorized workflow, on which Neil is an expert. However, Neil satisfied their curiosity by explaining that it is not possible to talk about all workflows, as they keep changing with the nature of the project, and finally all boil down to common sense.

Shooting with Green Screen

The usage of visual effects has become an integral part of the filmmaking process, and it is not limited anymore to fantasy or sci-fi features that call for the creation of complex non-existent creatures and locations. Today, it is increasingly being used in regular productions to add smaller nuances to scenes, to extend live-action sets, to add locations and objects that would have been expensive to shoot, and so on. That is to say, it is not confined to creating big bulky Transformers alone, but is also used for something as small as a TV or mobile screen replacement. And that is the reason why all new-age filmmakers need to be equipped with the fundamental knowledge and understanding of conducting a chroma, or green screen, shoot.


To chroma key is to composite two separate images into one. In video production, a blue or green screen, ideally made of non-reflective cotton, is used behind the subject so that the green or blue color can be keyed out, or made transparent, and another background can replace it. Chroma keys are generally blue or green because these colors are furthest from human skin tones. Green has become the more popular choice because the sensors in the latest digital cameras work better with green, and the green channel is the cleanest in them.
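The keying idea can be sketched in a few lines. This toy keyer marks a pixel as “screen” whenever its green channel clearly dominates red and blue; the dominance threshold is an illustrative assumption, and real keyers build far softer, more sophisticated mattes:

```python
def chroma_key(fg, bg, dominance=1.3):
    """Composite fg over bg, replacing predominantly green fg pixels.

    fg, bg: equal-sized nested lists of (r, g, b) tuples in [0, 1].
    dominance: how much stronger green must be than red and blue for a
    pixel to count as screen (an illustrative threshold, not a standard).
    """
    out = []
    for fg_row, bg_row in zip(fg, bg):
        row = []
        for (r, g, b), bg_pixel in zip(fg_row, bg_row):
            is_screen = g > dominance * r and g > dominance * b
            row.append(bg_pixel if is_screen else (r, g, b))
        out.append(row)
    return out

# One green-screen pixel and one skin-tone pixel over a blue background.
fg = [[(0.1, 0.9, 0.1), (0.8, 0.6, 0.5)]]
bg = [[(0.0, 0.0, 1.0), (0.0, 0.0, 1.0)]]
composite = chroma_key(fg, bg)
```

The same dominance test also shows why green clothing or green spill on the subject is dangerous: any pixel that crosses the threshold is replaced, whether it belongs to the screen or not.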

Certain basic things to remember: when setting up a green screen, one needs to remove all possible wrinkles from it. Pulling the ends of the screen tight and positioning it with the help of a stand and tape or clamps helps in this process. Also, when storing, it is advisable to roll the screen.

Lighting the background is also very important for getting a good key. Shadows and hot spots on the backdrop can prove very difficult to remove in post. A three-point or five-point lighting setup, done effectively to eliminate shadows, is essential in a chroma shoot. As for the camera, the white balance has to be set properly and the ISO needs to be kept at its lowest. Higher ISO settings produce more noise in your image, which makes keying difficult.


Also keep the aperture of the camera as wide as possible. You will get a shallower depth of field with a wider aperture, which in turn will blur the background, making it easier to key out.

Other pointers to keep in mind: do not let your subjects wear green, as this will lead to the subject’s clothes getting keyed out along with the background in post. Try not to have reflective clothing, jewelry or glass props on the set, as these might pick up a green glow that is hard to edit out. Make your subject stand at least 4-6 feet away from the backdrop to avoid green spill, wherein the green color of the background spills onto the subject’s skin, leading to a green glow.

With all these points in mind, when chroma key is done properly it can open a whole new world of possibilities only limited by the filmmaker’s imagination.

The Magic of Prosthetic and Makeup Effects

Seen ‘The Curious Case of Benjamin Button’ and wondered how they managed to make Brad Pitt look over 60 years old? Or how they managed to make those Orcs look so menacingly ugly in ‘The Lord of the Rings’? Well, Prosthetic and Makeup Effects is your answer.


A discipline of makeup, Prosthetic and Makeup effects utilizes various specialized materials and methods to create looks which cannot be achieved with the regular makeup techniques. This discipline requires painting and sculpting skills along with traditional makeup expertise. It is commonly used in film and TV productions.


Prosthetic and makeup effects are used to make an actor look aged or young, to enhance or modify an existing body part or to add a completely new one; they can morph the face of an actor into that of an entirely different creature, and much more. For an artist who is adequately skilled in this discipline, the sky is the limit when it comes to creating innovative yet believable looks.

The process of applying this specialized form of makeup usually begins with the creation of a mold or a cast. The makeup artist sculpts a realistic model of the actor’s face, or of the body part in question, and this serves as the base for the artist’s work. This process of creating a lifelike replica for prosthetic enhancement is called lifecasting, and the mold created is called a lifecast. These lifecasts are usually made of silicone rubber or prosthetic alginate. The materials used in this makeup have to be selected with care, as they are worn on the actor’s skin; the actor’s allergies also have to be taken into account.

Once the lifecast is in place, the artist will start modifying its look depending on the character requirement. One can add wrinkles, wounds, skin aberrations, discoloration, deformity, specific texture etc. This is how in fantasy movies, the elves get their pointed ears or hobbits get their hairy large feet.

The process of applying prosthetic makeup can be a time-consuming one if the character requirement is complex. For Benjamin Button, Brad Pitt had to undergo one of the most difficult and time-consuming makeup processes. His aged look was achieved with a blend of conventional visual effects coupled with makeup effects, which at times took over 5 hours to complete.

One of the key factors in this art-form is making the makeup believable, and for this to happen the prosthetic appendages should blend seamlessly with the actor’s skin and body.


With the increasing use of visual effects in films, makeup artists work collaboratively with VFX technicians to get an enhanced and complete look for the characters. It works incredibly well for the budget too, as prosthetic makeup can save huge chunks of money by doing effects that are within its sphere of possibility. Some of the best examples of this marriage of prosthetics and computer graphics are the noseless look of Voldemort in the Harry Potter series and the zombie effects in the post-apocalyptic television drama ‘The Walking Dead’.

VFX coupled with prosthetics can achieve looks makeup alone cannot.

Sound… Bigger than visuals for new age filmmakers

“Films are 50 percent visual and 50 percent sound. Sometimes sound even overplays the visual,” says David Lynch. And anybody who has seen the ‘Lynchian’ handiworks of this Academy Award-nominated director would fervently agree.


Sound in films encompasses the dialogues, background score, ambient sounds, music, etc., and is a very crucial component of storytelling. In many films, even before the visual has registered in our brains, we have already been introduced to the premise of the scene by the sound. All of us have jumped out of our seats at the creaking of doors in horror films, or covered our ears at the unsought footsteps of the killer in slasher films. That is the impact of sound in films. The sound conditions and colors how we perceive the visual.


Even for a movie like Avatar, which will be known in the history of cinema as a visual phenomenon, sound design played a huge role in making the world of Pandora believable. One of the biggest challenges for the sound team on the movie was creating sound effects that would match the brilliant imagery created by Weta Digital. Every creature and environment sound, originally created or recorded using real animals and environmental elements, had to be manipulated to make it sound unique to the world of Pandora.

Though most of the ‘sound script writing’ happens in post-production, the process of sound design begins even before the film goes on the floor. A sound designer should read the script of his film with sound in mind, and try to imagine the sounds as he goes through the scenes. This helps in developing the acoustic landscape of the film. Although the complete soundscape cannot be developed from the script with this approach, you still get a starting point to build on as you go ahead with the shoot.

This initial design takes its full form with tremendous inputs from the atmosphere of the visuals, which either enhance your original idea or force you to take a completely new direction. Also important is capturing the highest quality audio while shooting on location. The best way to ensure your film sounds professional is to get the best quality sound from the source.


The latest sound recording, mixing and reproduction technology has made it possible for filmmakers to have a greater dynamic range of sound. Now we can create soundtracks with more precision and intricacy. There is more control and one can do more in the cutting room. But this has brought in with it the risk of being too loud, which in some cases borders on being vulgar. And with so much control over detail, you need to work even more accurately than before and make sure all sound bites are aligned to the design you are aiming for. The true beauty of sound comes out when it is used with subtlety in perfect balance with the visuals.

Unfortunately, viewers might not notice when the sound design is good. However, poor sound rarely goes unnoticed. The trick is to use the sound as a character in the film and not noise.

Methods and styles of Acting

Acting is the art of storytelling through gesture and body movement, combined with conversation between the enacting people. It can also be portrayed by characters in running audio-visuals in a universally accepted format. There are several visual formats of storytelling, such as theatre, films and TV series; the most glorified among them is pure storytelling. In India it originated with the street theatres, the Nautankis. These groups performed the various Indian folklores and the great epics, like the Ramayana and the Mahabharata, in a semi-dance musical form on street stages, moving from city to city and state to state across India. Various Indian dance forms are also famous internationally for portraying characters on stage. It is all about the gestural presentation of emotions in a unique format, accompanied by music and lighting to glorify those emotions. Be it a war or a love story, an epic or village folklore, we had it all in India. But with modernization, theatre evolved, with Shakespeare’s work laying the real foundation of modern acting technique. There were several other artists and actors who invented their own forms of acting, so successful that their practitioners today comprise the most glorified and sought-after actors of the century.

The most successful of all was Constantin Stanislavski, whose technique was taken up and developed by Lee Strasberg into what is known as Method acting, a classic example of its practice being Al Pacino.

Let us now look at the various techniques of acting invented by these great artists.

Acting Techniques for Shakespeare.

Shakespeare left a body of work that is by far one of the most difficult acting styles to pull off successfully; it is also one of the most sought after, and a serious thespian’s dream. Attempting to learn ‘Old English’ is comparable to learning a foreign language, and remembering your lines is terribly difficult, especially lengthy monologues. However, there are a few acting techniques for Shakespeare that will help ease the complexity of your role.

First, watch the film of the play you have been cast in and make extensive notes. Delivery, pronunciation and projection need to be practiced frequently. Think about just how old the play is and how different society was in Shakespearean times. Think about the play’s content, its message, your character’s purpose in the play and the feelings your character would be experiencing. Spend some time learning the old English; knowing what you are saying will bring conviction, rather than just reeling off a few memorized words meaninglessly. Practice in front of a mirror; check your posture, breathing and reaction stance.

Method Acting

Method acting involves adopting the lifestyle, habits or traits that reflect the character you are trying to portray. Immersing yourself heavily in your character’s mindset will enable you to understand their motives and actions, and gain a better understanding of how they feel. As a result, you portray them with greater accuracy; many successful actors and actresses have adopted this particular method and, as a result, gained awards and praise for performances using it.


Constantin Stanislavski influenced the acting world so greatly that most modern acting techniques stem from Stanislavskian approaches. Stanislavskian acting involves analyzing the script and segmenting it, looking at what methods a character resorts to in order to overcome obstacles and reach their objectives. Which of the three paths of action would they pursue: would they give up when faced with an obstacle, find a way to solve their problem, or carry on regardless of their plight? His emphasis was on realism and the accurate reflection of reality, using exercises like the ‘magic if’: what would you do if this happened to you, and why do you think your character would act in this way?

Brecht Acting Techniques.

The Brechtian approach includes acting formats such as stereotypes, the use of placards, ensemble and montage. Bertolt Brecht was the father of epic theatre; his goal was to provoke the audience into thinking about society and to encourage change within it. He placed great emphasis on gesture for the demonstration of emotion. He believed the message was superior to character, and the story and situation themselves to be more important than the personal challenges within the situation. Brechtian theatre demonstrates, and allows for, various acting styles to co-exist.

Artaud Acting Techniques.

Artaud thought very differently from Brecht; his thinking placed heavy emphasis on invoking deep-rooted feelings through acting. He believed the theatre was about action and the element of surprise. His Theatre of Cruelty approach, with which he is best associated, takes acting to the subconscious level, using painful memories and strong feelings to invoke strong emotion. Antonin Artaud thought less of words and more of profound impact. Whereas Brecht wanted the audience to go out and change society, Artaud wanted them shaken to their soul, to look within and make changes within themselves.

Meisner Technique.

Sanford Meisner’s technique is predominantly focused on the self, the circumstances, and one’s effect on and reaction to others. Repetitive dialogues are used as an exercise; these enable actors to play on action and reaction, depending on how the line is delivered at that moment. It is about considering the character’s objective, and reading tone and body language. Meisner’s cause-and-effect teaching helps the actor attune themselves to the community of the performance: who is friend and who is foe, what is the catalyst of change, and how, as a character, you deal with change and the chain reaction that follows it.

Methods and Styles of Script Writing

SCRIPT WRITING is the art of writing an audio-visual work in a universally accepted written format, even before starting to shoot (any form of audio-visual media product). This art is most glorified in movies; however, it is also used for other audio-visual products such as TV serials, ads, documentaries, etc.

Scriptwriting emerged way back in Greek culture, where theatrical drama was predominant as the regular entertainment of the royals. It was further amplified by Shakespeare. His famous plays ROMEO AND JULIET, HAMLET and OTHELLO are considered masterpieces for all time, and they laid the foundation for today’s art of story writing. With the emergence of TV and films, it became possible for entrepreneurs and filmmakers to make running audio-visuals, captured on film with camera and lighting technology, and show them to the audience as a complete story or film.

Once this was made possible, the next attempt was to make films of different styles, incorporating the already known favourite stories. These were familiar to people in the form of legends, plays and novels. Thus modern scriptwriting came into existence. With the emergence of modern technology and techniques, it became possible to make films based on fiction which one could previously only imagine. Bringing extraterrestrials and alien worlds to earthlings was now possible, and it was an instant success story. Thus genre started playing an important role in scriptwriting, as choice became available to the people, and the audience’s choice started playing a major role in the selection of the movies to be made. Nonetheless, the basic foundation of all films and audio-visuals remains the same: STORYTELLING.

Scriptwriting consists of three main components:

1. STORY (the whole story)
2. SCREENPLAY (the story distributed scene by scene, with scene descriptions)
3. DIALOGUES (the conversation in the scenes)

With these components in place, one knows exactly where to begin. A story can be adapted from a novel, an incident, a legend, a wild imagination, a personal experience or any source that can excite the curiosity of the audience. It starts with an idea, or what we generally refer to as the concept. The story idea, or the main incident which initiates or carries the story ahead, is called the plot.

Being a storytelling medium, it has to go through the ups and downs of incidents; it has to have obstacles, fights, wars, romance, evil and good people, and so on. So here we start giving them conventional technical names: the people are characters, the type of story is the genre and, most importantly, the happenings or incidents are scenes, as it is a running format.

Technically, scriptwriting is divided into three phases which the story comprises, called the 3-ACT STRUCTURE.

Act 1 consists of the set-up of the conflict (30 mins). Act 2 (60 mins) consists of the protagonist facing the problem, and Act 3 (30 mins) consists of the protagonist resolving the problem. In total, 120 mins, or 2 hrs (there are many more minute details involved).

The writer should start off with a clear mind, acknowledging whom the story is about, as the story has to be told from that perspective. Every story will have a start and an end, so the character should go through a series of incidents after which he changes as a person. The logic is that when a person goes through certain experiences in life, he changes as a person; simple references being educated people becoming gangsters, or good, law-abiding police officers becoming tainted. The duration of the story is limited, but the span of time can be moderated as per the story, revolving around a lifetime, a few years, a few months, one day, and so on. The character whom the story is about is called the protagonist; the villain, or the person who plays the opposing character to the protagonist, is called the antagonist. The fight of the protagonist in the story is called the conflict. It is this conflict that gives the story more weight, more legitimacy, more endurance: the more severe the conflict, the more effort the protagonist has to put in to win, and the more laudable the effort – WHICH DEFINES HEROISM!

Sometimes there are stories in which the protagonist faces more than one major conflict, so there are multiple plots. Any such additional conflict is called a subplot. This is the basic three-act structure in brief. There are intense technicalities involved in screenwriting, which require great effort and continuous writing to better oneself.
If the writer himself is the director, then the screenplay might change in technicality. Here he goes the extra mile to note camera angles and other pointers. In the end, it all varies from technician to technician and project to project.


Film-making is the process, or medium, of storytelling in the form of visual images supported by sound (dialogue and music). It is total teamwork, where experts in various crafts come together to build a story’s narrative. The captain of this ship is the DIRECTOR.

A director, like his own personality or individuality, has his own way of bringing forth his narrative on screen, which can be understood from the fact that no two people can make exactly the same film with the same story. Period. This is because every director uses his own imagination and creativity to mold the story into a chain of scenes that make a continuous visual narrative.

There are many different starting points for telling a story visually. One can start with an idea, a causality (otherwise known as the plot), a character, or even with a location and its properties. Hence there are directors who put more effort into the storyline, into the musicality of the storytelling process (i.e., editing, with its inherent rhythm) or into striking audio-visual imagery. There are many directors who bank primarily on their actors’ names and capabilities; their films depend on the star cast. And there are a few directors who see cinema as an organic art where each of these individual elements fits into an effective design to complement the others. In the end, there are almost as many theories guiding a film production as there are directors.

Apart from the methodical perspective, a director is also known, on the basis of his films, for his Style. A director well known for his work, or greatly appreciated by the masses, has a signature style of narrative which tells a lot about him and his vision. For example, Alfred Hitchcock is known for his thrillers and suspense movies, Steven Spielberg for his war and science fiction films, Wolfgang Petersen for his action flicks, Francis Ford Coppola for his drama, Martin Scorsese for his concepts of guilt and redemption, machismo, modern crime, and violence, and Woody Allen for his drama and slapstick comedy.

In India we had V. Shantaram for his social concepts, Shakti Samanta for his romantic films, Raj Kapoor for his intense love stories, Manmohan Desai for his family-reunion-cum-action films, Ramesh Sippy for his intense relationship-conflict-cum-action films, Yash Chopra for his films portraying great personal conflict and, in recent times, romance, Subhash Ghai for his action films and musicals, Sanjay Leela Bhansali for his dark (controversial) intense love stories, Ram Gopal Varma for his Mafia genre, Mahesh Bhatt for his romantic films and Prakash Jha for his social concepts.

When one chooses to be a film maker, he needs to have a clear vision of what he intends to do, as it is a great responsibility towards society and the audience that will be reacting to it. There has to be social messaging, connectivity and entertainment at the same time. Watching a film is an experience we undergo, and it may have an impact in the long run. This art goes far beyond business, money, fame and recognition, which are all secondary.

The primary OBJECTIVE is contentment: satisfaction in the ability to bring a thought to the screen as a full-length running narrative. If this is followed, the ethics will remain intact. Else we would all flow blindfolded in the heavy current of business and the money-making habit, and destroy the essence, the feel and the spirit of this pure and most touching art of storytelling.


The process of editing film digitally is constantly evolving, but the basic concept remains the same: you start and end on film, with only the creative part of the editing process changing. Following is a simplified workflow outlining the basic process.

Although this workflow appears more complicated than the traditional editing method, many of the steps can be automated. For most filmmakers, the benefits of being able to edit digitally easily offset any added procedures.
Several parts of this process are the same as for the traditional method; as mentioned earlier, it is only the middle part of the film editing process that is affected by editing digitally.

Stage 1: Shooting the Film and Recording the Sound
Audio is always recorded separately from the film, on a separate sound recorder. This is known as shooting dual system sound. While shooting the film, you need to include a way to synchronize the sound to the picture. The most common method is to use a clapper board (also called a slate or sticks) at the beginning of each take. There are a number of other methods you can use, but the general idea is to have a single cue that is both audible and visible (you can see what caused the noise).

Stage 2: Developing the Film
The developed film is known as the original camera negative. This negative will eventually be used to create the final movie and must be handled with extreme care to avoid scratching or contaminating it. The negative is used to create a video transfer (and typically a work print, as with the traditional method) and then put aside until the negative is conformed.

Stage 3: Transferring the Film to Video
The first step in converting the film to a format suitable for use by Final Cut Pro is to transfer it to video, usually using a telecine. Telecines are devices that scan each film frame onto a charge-coupled device (CCD) to convert the film frames to video frames. Although the video that the telecine outputs is typically not used for anything besides determining edit points, it’s a good idea to make the transfer quality as high as possible. If you decide against making work prints, this may be your only chance to spot undesirable elements (such as microphone booms and shadows) in each take before committing to them. The video output should have the film’s key number, the video time code, and the production audio time code burned into each frame.
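The burned-in windows matter because edit points chosen against video time code must eventually be translated back into film frames. A rough sketch of that frame arithmetic follows; the helper functions are hypothetical (not part of Final Cut Pro or Cinema Tools) and assume non-drop-frame time code at integer frame rates:

```python
# Hypothetical helpers illustrating time code/frame arithmetic.
# Assumes non-drop-frame "HH:MM:SS:FF" time code at an integer frame rate.

def timecode_to_frames(tc, fps):
    """Convert a time code string to an absolute frame count."""
    h, m, s, f = (int(x) for x in tc.split(":"))
    return ((h * 60 + m) * 60 + s) * fps + f

def frames_to_timecode(frames, fps):
    """Convert an absolute frame count back to a time code string."""
    f = frames % fps
    s = frames // fps
    return "%02d:%02d:%02d:%02d" % (s // 3600, (s % 3600) // 60, s % 60, f)

# An edit point one minute into a 30 fps transfer...
n = timecode_to_frames("00:01:00:00", 30)     # 1800 video frames
# ...lands at the same moment counted in 24 fps film frames:
print(frames_to_timecode(n * 24 // 30, 24))   # 00:01:00:00
```

Real NTSC transfers run at 29.97 fps with drop-frame time code, and the film-to-video relationship also involves the 3:2 pull-down, so actual tools handle considerably more bookkeeping than this sketch.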

The actual videotape format used for the transfer is not all that important, as long as it uses reliable time code and you will later be able to capture the video and audio digitally on the computer prior to editing. An exception is if you intend to use the video transfer to also create an edited video version of the project, perhaps for a video trailer. This requires two tapes to be made at the transfer: one that is high quality and without window burn, and another that has window burn.

It is strongly recommended that the audio be synced to the video and recorded onto the tape along with the video during the telecine process. There are also methods you can use to sync the audio after the telecine process is complete; the important thing is to be able to simultaneously capture both the video and its synchronized audio with Final Cut Pro.

Stage 4: Creating a Cinema Tools Database
The key to using Cinema Tools is its database. The database is similar to the traditional code book used by filmmakers. It contains information about all elements involved in a project, including film key numbers, video and audio time code, and the actual clip files used by Final Cut Pro. Depending on your situation, the database may contain a record for each take used in the edit or may contain single records for each film roll. The film-to-video transfer process provides a log file that Cinema Tools can import as the basis of its database. It is this database that Cinema Tools uses to match your Final Cut Pro edits back to the film’s key numbers while generating the cut list.
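To make the idea of a record concrete, here is a minimal sketch of what one such record might hold. The class and field names are hypothetical, chosen only to mirror the kinds of information Cinema Tools tracks:

```python
from dataclasses import dataclass

# Hypothetical sketch of one database record, modeled on the information
# the text describes: film key number, video and audio time code, and the
# captured clip file used by Final Cut Pro.
@dataclass
class TakeRecord:
    key_number: str      # e.g. "KJ 29 1234-5678+12" (prefix, footage, frame)
    video_timecode: str  # time code of the telecine transfer, "HH:MM:SS:FF"
    audio_timecode: str  # production sound time code
    clip_path: str       # path to the captured clip

take = TakeRecord(
    key_number="KJ 29 1234-5678+12",
    video_timecode="01:02:03:12",
    audio_timecode="03:15:40:02",
    clip_path="Scenes/sc12_tk03.mov",
)
```

Generating a cut list then amounts to looking up, for each edit point’s video time code, the matching record and reading off the film key number.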

There is no requirement that the database be created before the video and audio are captured, or even before they are edited. The only real requirement is that it must be created before a cut list can be exported. The advantage of creating the database before capturing the video and audio is that you can then use it to create batch capture lists, allowing Final Cut Pro to capture the clips. The database can also be updated and modified as you edit.

Stage 5: Capturing the Video and Audio
The video created during the telecine process must be captured as a digital file that can be edited with Final Cut Pro. The way you do this depends on the tape format used for the telecine transfer and the capabilities of your computer. You need to use a third-party capture card to capture files from a Betacam SP or Digital Betacam tape machine. If you are using a DVCAM source, you can import directly via FireWire. To take advantage of the batch capture capability of Final Cut Pro, you should use a frame-accurate, device-controllable source.

As opposed to the captured video, which is never actually used in the final movie, the edited audio can be. You may decide to capture the audio at high quality and export the edited audio as an Open Media Framework (OMF) file that can be imported into a Digital Audio Workstation (DAW) for finishing. Another approach is to capture the audio at low quality and, when finished editing, export an audio EDL that can be used by an audio post-production facility, where the production audio can be captured and processed at very high quality.

Stage 6: Processing the Video and Audio Clips
Depending on how you are using Cinema Tools, the captured clips can be linked to the Cinema Tools database. They can also be processed, using the Cinema Tools Reverse Telecine and Conform features, to ensure compatibility with the Final Cut Pro editing timebase. For example, the Cinema Tools Reverse Telecine feature allows you to remove the extra frames added when transferring film to NTSC video using the 3:2 pull-down process.
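The arithmetic behind the 3:2 pull-down is easy to sketch: each group of 4 film frames (24 fps) is spread over 5 video frames (30 fps) by repeating fields, and reverse telecine drops the repeats. The following toy model (not the Cinema Tools implementation; it assumes uniquely labeled frames purely for illustration) shows the round trip:

```python
# Toy illustration of the 3:2 pull-down and its reversal.

def pulldown_32(film_frames):
    """Spread each group of 4 film frames (A B C D) over 10 video fields
    (A A B B B C C D D D), then pair fields into 5 video frames."""
    fields = []
    for i in range(0, len(film_frames), 4):
        a, b, c, d = film_frames[i:i + 4]
        fields += [a, a, b, b, b, c, c, d, d, d]
    return [(fields[j], fields[j + 1]) for j in range(0, len(fields), 2)]

def reverse_pulldown(video_frames):
    """Recover the film frames by keeping only the first field of each."""
    fields = [f for frame in video_frames for f in frame]
    seen, film = set(), []
    for f in fields:
        if f not in seen:
            seen.add(f)
            film.append(f)
    return film

video = pulldown_32(["A", "B", "C", "D"])
assert len(video) == 5                          # 4 film frames -> 5 video frames
assert reverse_pulldown(video) == ["A", "B", "C", "D"]
```

Note the middle video frames, such as ("B", "C"), mix fields from two different film frames; these mixed frames are exactly what reverse telecine must undo before frame-accurate film matchback is possible.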

Stage 7: Editing the Video and Audio
You can now edit the project using Final Cut Pro. For the most part, you edit your film project the same as any video project. If you captured the audio separately from the video, you can synchronize the video and audio in Final Cut Pro.
Any effects you use, such as dissolves, wipes, speed changes, or titles, are not used directly by the film. These must be created on film at a facility specializing in film opticals.

It can be helpful for the negative cutter if you output a videotape of the final project edit. Although the cut list provides all the information required to match the film to the video edit, it helps to visually see the cuts.

Stage 8: Exporting the Film Lists

After you’ve finished editing, you export a film list that can contain a variety of film-related lists, including the cut list, which the negative cutter uses to match the original camera negative to the edited video. Additional lists can also be generated, such as a duplicate list, which indicates when any source material is used more than once.
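The logic of a duplicate list is simple to illustrate: walk the edit’s source ranges and flag any film frame claimed more than once, since a physical negative frame can only be spliced into one place and any reuse requires an optical duplicate. A toy sketch, with hypothetical roll names and frame ranges:

```python
# Toy sketch of duplicate detection across an edit's source ranges.
# Roll names and frame ranges are hypothetical, for illustration only.

def duplicate_sources(edits):
    """edits: list of (source_roll, start_frame, end_frame) tuples.
    Returns every (roll, frame) pair that is used more than once."""
    used, dupes = set(), []
    for roll, start, end in edits:
        for frame in range(start, end + 1):
            key = (roll, frame)
            if key in used:
                dupes.append(key)
            used.add(key)
    return dupes

edits = [("A001", 100, 199), ("A002", 50, 89), ("A001", 150, 249)]
print(duplicate_sources(edits)[:2])   # first two duplicated frames of roll A001
```

A real duplicate list reports ranges in key numbers rather than individual frames, but the underlying check is the same set-membership idea.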

Stage 9: Creating a Test Cut on a Workprint
Before the original camera negative is conformed, it is strongly suggested that you conform a work print to the cut list to make sure the cut list is accurate (some negative cutters insist on having a conformed work print to work from). There are a number of things that can cause inaccuracies in a cut list:

Damaged or misread key numbers entered during the telecine transfer process

Incorrect time code values

Time code errors introduced during the capture process

With NTSC video, 3:2 pull-down problems

In addition to verifying the cut list, other issues, such as the pacing of a scene, are often hard to get a feel for until you see the film projected on a large screen. This also gives you a chance to ensure that the selected shots do not have unexpected problems.
If your production process involves work print screenings and modifications, you can also export a change list that describes what needs to be done to a work print to make it match a new version of the sequence edited in Final Cut Pro.

Stage 10: Conforming the Negative
The negative cutter uses the cut list, the edited work print, and the edited video (if available) as a guide to make edits to the original camera negative. Because there is only one negative, it is crucial that no mistakes are made at this point. As opposed to the cutting and splicing methods used when working with the work print, the cutting and splicing methods used for conforming the negative destroy frames on each end of the edit. This makes extending an edit virtually impossible and is one of the reasons you must be absolutely sure of your edit points before beginning the conform process.

Stage 11: Finishing the Audio
You usually rough-cut the audio while editing the video (stage 7); the audio is typically finished while the film is being conformed. As mentioned in stage 5, you can use an exported OMF version of the Final Cut Pro edited audio or export an audio EDL and recapture the production audio (using the original sound rolls) at a DAW. Finishing the audio is where you perform the final sound mix, including cleaning up dialogue issues and adding sound effects, backgrounds, and music.

Stage 12: Creating the Answer and Release Prints
After the original camera negative has been conformed and the audio finalized, you can have an answer print created. This print is used for the final color timing, where the color balance and exposure for each shot are adjusted to ensure the shots all work well together. You may need to create several answer prints before you are happy with the results. Once you are satisfied with the answer print, the final release print is made.


Film editing is part of the creative post-production (after shooting) process of filmmaking. It involves selecting and combining shots into sequences, ultimately creating a finished motion picture, an art of visual storytelling. Film editing is the only art that is unique to cinema, separating film-making from the art forms that preceded it (such as photography, theater, dance, writing, and directing), although there are close parallels to the editing process in other art forms like poetry or novel writing. Film editing is often referred to as the “invisible art” because, when it is well practiced, the viewer can become so engaged that he or she is not even aware of the editor’s work, which is why awards are important for giving it recognition.

Edwin S. Porter is generally thought to be the American filmmaker who first put film editing to use. Porter worked as an electrician before joining the film laboratory of Thomas Alva Edison in the late 1890s. Early films by Thomas Edison (whose company invented a motion camera and projector) and others were short films that were one long, static, locked-down shot. Motion in the shot was all that was necessary to amuse an audience, so the first films simply showed activity such as traffic moving on a city street. There was no story and no editing. Each film ran as long as there was film in the camera. When Edison’s motion picture studio wanted to increase the length of its short films, Edison came to Porter. Porter made the breakthrough film Life of an American Fireman in 1903. The film was among the first that had a plot, action, and even a close-up of a hand pulling a fire alarm.
Other films were to follow. Porter’s ground-breaking film, The Great Train Robbery, is still shown in film schools today as an example of early editing form. It was produced in 1903 and was one of the first examples of dynamic, action editing: piecing together scenes shot at different times and places to create emotional impact unavailable in a static long shot. Being one of the first film hyphenates (film director, editor and engineer), Porter also invented and utilized some of the very first (albeit primitive) special effects, such as double exposures, miniatures and split screens.

Continuity editing is the predominant style of film and video editing in the post-production of narrative films and television programs. The purpose of continuity editing is to smooth over the inherent discontinuity of the editing process and to establish a logical coherence between shots. In most films, logical coherence is achieved by cutting to continuity, which emphasizes smooth transitions of time and space. Technically, continuity is the responsibility of the script supervisor and the film director, who are together responsible for preserving continuity and preventing errors from take to take and shot to shot. The script supervisor, who sits next to the director during shooting, keeps the physical continuity of the edit in mind as shots are set up. He is the editor’s watchman. If shots are taken out of sequence, as is often the case, he will be alert to make sure that that beer glass is in the appropriate state. The editor uses the script supervisor’s notes during post-production to log and keep track of the vast amounts of footage and takes that a director might shoot.

There are several different ways to edit video and each method has its pros and cons. Although most editors opt for digital non-linear editing for most projects, it makes sense to have an understanding of how each method works.

What follows is a very brief overview of each method.

Film Splicing
Technically this isn’t video editing, it is film editing. But it is worth a mention as it was the first way to edit moving pictures and conceptually it forms the basis of all video editing.
Traditionally, film is edited by cutting sections of the film and rearranging or discarding them. The process is very straightforward and mechanical. In theory a film could be edited with a pair of scissors and some splicing tape, although in reality a splicing machine is the only practical solution. A splicing machine allows film footage to be lined up and held in place while it is cut or spliced together.

Tape to Tape (Linear)
Linear editing was the original method of editing electronic video tapes, before editing computers became available in the 1990s. Although it is no longer the preferred option for most serious work, it still has a place and remains the better option in some cases. It is likely that linear editing will be a useful skill for a long time to come.

In linear editing, video is selectively copied from one tape to another. It requires at least two video machines connected together — one acts as the source and the other is the recorder. The basic procedure is quite simple:

1. Place the video to be edited in the source machine and a blank tape in the recorder.
2. Press play on the source machine and record on the recorder.

The idea is to record only those parts of the source tape you want to keep. In this way desired footage is copied in the correct order from the original tape to a new tape. The new tape becomes the edited version.
This method of editing is called “linear” because it must be done in a linear fashion; that is, starting with the first shot and working through to the last shot. If the editor changes their mind or notices a mistake, it is almost impossible to go back and re-edit an earlier part of the video. However, with a little practice, linear editing is relatively simple and trouble-free.

Digital/Computer (Non-linear)
In this method, video footage is recorded (captured) onto a computer hard drive and then edited using specialized software. Once the editing is complete, the finished product is recorded back to tape or optical disk.

Non-linear editing has many significant advantages over linear editing. Most notably, it is a very flexible method which allows you to make changes to any part of the video at any time. This is why it’s called “non-linear” — because you don’t have to edit in a linear fashion.
One of the most difficult aspects of non-linear digital video is the array of hardware and software options available. There are also several common video standards which are incompatible with each other, and setting up a robust editing system can be a challenge.
The effort is worth it. Although non-linear editing is more difficult to learn than linear, once you have mastered the basics you will be able to do much more, much faster.

Live Editing
In some situations multiple cameras and other video sources are routed through a central mixing console and edited in real time. Live television coverage is an example of live editing.
Live editing is a fairly specialist topic and won’t concern most people.

Dawn of Indian Cinema

The world’s first films were soundless, and without story. Movies were shot without the background music and dialogue that today form the basis of every movie. Things continued this way till 1926, when the Hollywood studio Warner Bros. introduced the Vitaphone system, producing short films of live entertainment acts and public figures and adding recorded sound effects and orchestral scores to some of its major features. In late 1927, Warner’s released The Jazz Singer, which was mostly silent but contained what is generally regarded as the first synchronized dialogue (and singing) in a feature film; the idea had actually been attempted earlier, in 1914, by Charles Taze Russell with the lengthy film The Photo-Drama of Creation, which consisted of picture slides and moving pictures synchronized with phonograph records of talks and music. By the end of 1929, Hollywood was almost all-talkie, with several competing sound systems (soon to be standardized). The total changeover was slightly slower in the rest of the world, principally for economic reasons. Cultural reasons were also a factor in countries like China and Japan, where silents co-existed successfully with sound well into the 1930s, producing what would become some of the most revered classics in those countries, like Wu Yonggang’s The Goddess (China, 1934) and Yasujiro Ozu’s I Was Born, But... (Japan, 1932). But even in Japan, a figure such as the benshi, the live narrator who was a major part of Japanese silent cinema, found his acting career ending.

India was not too far behind. After their first screenings in 1895, the Lumière films came to Bombay in 1896. Early short films were made by Hiralal Sen, The Prince of Persia (1898) being among the first. The first Indian movie released in India was Shree Pundalik by Dadasaheb Torne, on 18 May 1912 at the Coronation Cinematograph, Mumbai.

The first full-length motion picture in India was produced by Dadasaheb Phalke. Phalke, the pioneer of the Indian film industry and a scholar of India’s languages and culture, brought together elements from Sanskrit epics to produce Raja Harishchandra (1913), a silent film in Marathi. This was the emergence of cinema in India, hence the DADASAHEB PHALKE AWARD has great importance among the film fraternity!!

Ardeshir Irani released Alam Ara, the first Indian talkie, on 14 March 1931. H.M. Reddy produced and directed Bhakta Prahlada (Telugu), released on 15 September 1931, and directed Kalidas (Tamil), produced by Ardeshir Irani and released on 31 October 1931. These two films were south India’s first talking films to have a theatrical release.

As sound technology advanced, the 1930s saw the rise of music in Indian cinema, with musicals such as Indra Sabha and Devi Devyani marking the beginning of song-and-dance in India’s films. Studios emerged in major cities such as Chennai, Kolkata, and Mumbai as film making became an established craft by 1935, exemplified by the success of Devdas! The success of Devdas proved that characterization would play an important role in the future of Indian films. As the actors of the time were mostly singers or performers who had drifted into the movies, the profession was still not considered a gentleman’s pursuit, and at no cost a stage for women. The film makers understood that, for great portrayal of character and connection with the audience, characterization was essential, and it would be glorified only by a technique called Acting. People were about to see what they had only read in the ‘upannyas’, the great literary works of the great writers and litterateurs: the true dawn of Indian Cinema!!
