Cinematographic Learning – Part 4

Any kind of image or impression is visible through contrast. And any contrast is primarily created by light.

Just as a painter needs some working knowledge of the chemical and physical properties of his paints, a photographer needs to know the basics of his paint – light. The more he knows about how light interacts with nature and with his eyes, the better he can enlist light's help to create beautiful images.

It is every cinematographer's dream to be one with the light, to see the basic fabric of light. The moment he sees it, nature reveals herself to him. All the great masters of cinematography knew light as their greatest kin in the world. Keeping this in mind, it is no longer surprising that the maestro Satyajit Ray scolded his cinematographer, another maestro, Subrata Mitra, for taking too much time to light the set. He said plainly to Mitra's face, “You are light's slave!”

Ray was wrong. Cinematographers are not slaves to light, but kin. They seek oneness with light to materialize on the screen the sparks they have in mind.

It is mandatory for a genuine cinematographer to know as much about light as possible, exactly the way someone tries to understand his best friend. The formal study of light is known as optics.

The first thing to know about light may come as a little surprise. Light shows everything, but it is itself invisible. Light shows up only when something lies in its path of travel. When that path contains smoke or dust in the air, or clouds, streaks or beams can be seen. However, when light travels through clean air, someone who stands perpendicular to the rays and looks at the supposed beam in the air sees nothing.

We can see the light at its source, or as it comes through a glass. But an observer whose eyes are perpendicular to the beam cannot see the light. For her, it is complete darkness.


Perhaps the most important feature is that light always travels in a straight line. This is not difficult to understand when one keeps the phenomenon of shadow in mind. When a light bulb is lit, light radiates in a full sphere all around it.

All the individual light rays travel in their respective paths, in straight lines. However, if some object is placed in a light ray’s path, any one of three phenomena may be observed depending on the nature of the blocking object.

If light passes through the blocking object, mostly unaffected, the object is called transparent. Glass, water, air and many other minerals and synthetic materials are transparent.

If light is blocked, and cannot pass through, the object is known as opaque. Later, it can be seen that opaque objects are of different types too.

There are objects, like smoked glass, cellophane paper or plastic bottles, which block some rays and let others pass, spreading them outwards. These are called translucent objects. In cinematography, translucent objects play a major role, as will be seen later in this article.

When light is blocked by an opaque object, the blocked area takes the outline of the object; this is otherwise known as the shadow.

Hence, shadow is the absence of light, and just like any other contrast it is visible only when there is a lack of light in some part of an otherwise lit area. It sounds paradoxical, but to see a shadow prominently, light is needed.

Even the most transparent object in nature casts a shadow, as nothing is purely transparent. But the more opaque an object is, the darker its shadow (in the shadow of a perfectly opaque object there is no light at all, and the shadow area is totally dark).

What happens to the light rays that cannot pass through? Again, one of three things happens. They can be reflected off the surface of the opaque object in a very regular manner, just like a mirror. They can be reflected off the surface in an irregular manner that spreads them out, as with any visible object other than a mirror. Or, they can be totally absorbed.

As with any ideal physical phenomenon, pure mirror reflection, pure irregular reflection or pure absorption never happens. There is always a mix of all three, with one of them dominating.

One thing is sure, and no cinematographer can afford to forget it – whatever happens, an individual light ray always travels in a straight line, and, unlike water, can never flow around a blocking object.

So, it can be seen that when light rays meet something in their path, they are absorbed, transmitted through the object, or reflected off its surface. When light rays are mostly absorbed, the object looks dark. In other words, its details cannot be seen.

When the object surface reflects light rays, a very interesting natural rule is followed.

As can be seen in the diagram above, a light ray is always reflected at the same angle at which it falls on a surface. Nothing in the universe causes an exception to this. It is a fundamental property of light, and does not depend on the source, color or brightness of the light, or on any property of the reflecting surface.
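The rule can even be checked with a little vector arithmetic. Below is a rough sketch in Python; the helper name and the 2D setup are mine, for illustration only:

```python
# Law of reflection in vector form: r = d - 2 * (d . n) * n,
# where d is the incoming direction and n is the unit surface normal.
# The reflected ray leaves at exactly the angle at which it arrived.

def reflect(d, n):
    """Reflect a 2D direction vector d off a surface with unit normal n."""
    dot = d[0] * n[0] + d[1] * n[1]
    return (d[0] - 2 * dot * n[0], d[1] - 2 * dot * n[1])

# A ray travelling down-right at 45 degrees hits a horizontal mirror
# (surface normal pointing straight up):
incoming = (1.0, -1.0)
outgoing = reflect(incoming, (0.0, 1.0))
print(outgoing)  # (1.0, 1.0) - the ray leaves up-right, again at 45 degrees
```

Whatever the angle of the incoming ray, the outgoing ray mirrors it about the surface normal, exactly as the diagram shows.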

However, this does not say how a beam of light behaves while being reflected off an uneven surface.

For all cinematographic purposes, light can be considered a cylindrical pack of individual light rays – a beam, in common parlance.

Each ray in the beam reflects off at its angle of incidence (ie, the angle at which it hits the object surface). But if different areas of the surface are themselves at angles to each other (ie, if the surface is not a plane), even two parallel rays in the incoming beam will be reflected at angles to each other, depending on the angle between the two points where they strike the surface.

In other words, the light rays will fan out, or be concentrated.

Normally, visible surfaces fan out light rays very irregularly. The reflection they perform is known as diffuse reflection, as opposed to the very regular mirror reflection.

Sometimes cinematographers use a technical term – specular reflection. That is the same as mirror reflection (the Latin word for mirror is speculum).

In a mirror reflection the light source is seen while the reflecting surface is not noticeable. But, a diffuse reflection shows the reflecting surface, and not the light source. Human eyes see most objects because of diffuse reflection only.

Even in a diffuse reflection, a significant portion of the beam can cause mirror reflection at a particular angle, so the light source is partially reflected on a wall, or on a polished door. Such a reflection is known as a hotspot, and is best avoided in any shoot. A composition with a hotspot looks clumsy.

Sometimes a hotspot can be made less prominent by making the glossy area matte, with the help of a spray.

When light is allowed through by a transparent or translucent object, the light rays are bent. Just as in mirror reflection, two parallel rays remain parallel after bending, because they bend at the same angle, if the transparent object is a plane (or flat).

However, if the transparent object is curved, like part of a transparent sphere, the light rays bend more and more as one moves up or down from the center.

The bending can be towards one another, or away from one another. Accordingly, light rays converge to a focus, or diverge away.

Anyone can sense that such curved transparent objects are known as lenses. They magnify or reduce an image formed through them.

Bending of light by any transparent object is known as refraction. The bending can be uniform across the object, or it can increase in either direction. In the case of lenses, however, that increase is itself regular.

What happens when the bending is not regular? The light rays then cross one another randomly after passing through the object, and spread out. Translucent materials produce such an effect.

As the light rays are spread out by a translucent material, the effect is the same as increasing the size of the light source. That, in turn, makes the light cast a softer shadow.

This is why cinematographers place different translucent materials in front of light sources, and at different distances from them, to create different qualities of shadow. In nature, too, it is a very regular phenomenon. On a clear day, the sun casts very hard, well-defined, fine-edged shadows. But as the sky turns cloudy the shadows turn fuzzier, down to a practically shadowless situation when the sky is completely overcast.
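The softening can even be put in rough numbers. By similar triangles, the fuzzy edge of a shadow (the penumbra) grows with the size of the source relative to its distance. A sketch in Python, with made-up distances for illustration:

```python
# Rule-of-thumb penumbra width by similar triangles:
#   penumbra_width ~ source_size * (object_to_wall / source_to_object)
# A bigger or closer source means a wider, softer shadow edge.

def penumbra_width(source_size, source_to_object, object_to_wall):
    """All quantities in metres; returns the approximate penumbra width."""
    return source_size * object_to_wall / source_to_object

# A bare 0.1 m bulb vs the same bulb behind a 1.2 m diffusion frame,
# both 3 m from the subject, with the shadow falling on a wall 1 m behind:
hard = penumbra_width(0.1, 3.0, 1.0)
soft = penumbra_width(1.2, 3.0, 1.0)
print(hard, soft)  # the diffused source's shadow edge is 12 times wider
```

The overcast sky is the extreme case: the whole sky becomes the source, and the penumbra swallows the shadow entirely.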

Besides reflection and refraction, another important, but less noticeable behavioral property of light is its polarization.

It is necessary to go through a few fundamental facts about light before one tries to understand polarization.

What is light? What is it made of? A beam of light can be visualized as a pack of rays, emitted from the source and passing through space to fall somewhere.

But what are those individual rays made of? Particles? If they are particles, the overlapping of two different particles should always create more energy. After all, that is what one expects when one cannon ball strikes another to give more energy to the last one in the row, for a greater impact.

However, such interference of light creates bands of darker and brighter light. Brighter light is a commonsense expectation if two lights mix. But darker? That makes no sense.

Such an interference pattern makes perfect sense if light is visualized as a series of waves. When opposite waves cancel each other out, the visible effect is darkness.

However, light visualized as a wave cannot satisfactorily explain why light fails to turn around opaque objects.

Human beings have been studying light for two millennia or more. But only in the last century did a breakthrough on this question come.

Light is dual in nature. When it travels through space, it behaves mostly like a wave. But the moment it interacts with matter, it behaves like a particle.

No one really knows exactly what light is. But the approximate knowledge gathered so far is interesting enough to merit a lifetime of study.

Coming back to polarization: this property of light deals with its wave nature. Light is the visible spectrum of electromagnetic radiation. The full spectrum contains gamma rays, X-rays and ultraviolet rays on the more energetic side, and infrared rays (associated with heat), microwaves and radio waves of different types on the other.

In between lies a minuscule portion, seen by the human eye, called light.

A light wave can be visualized like a sine wave, though in reality it is multi-dimensional, unlike a sea wave. For practical purposes, one can imagine a light wave swelling out all around, like a sphere.

In a diagrammatic representation like the one above, the distance between two successive crests (or two successive troughs) is known as the wavelength. Physically, this determines the light's color. How high the energy level rises from the zero position is known as the wave's amplitude. That determines the light's brightness.

From the diagram above, it can be seen that the wavelength is different for red, green and blue light. The shorter the wavelength, the more energy is packed into that light.
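The wavelength-energy relation is the standard Planck formula, E = hc/λ. A quick sketch in Python; the wavelength figures are rough, representative values, not exact color boundaries:

```python
# Shorter wavelength means more energy per photon: E = h * c / wavelength.
H = 6.626e-34  # Planck's constant, joule-seconds
C = 3.0e8      # speed of light, metres per second

def photon_energy(wavelength_nm):
    """Energy in joules of a photon of the given wavelength (nanometres)."""
    return H * C / (wavelength_nm * 1e-9)

red, green, blue = 650, 550, 450  # rough wavelengths in nanometres
print(photon_energy(blue) > photon_energy(green) > photon_energy(red))  # True
```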

Light rays travel through space as such waves. However, the wave is not a one-dimensional, linear one, as mentioned above. Moreover, each wave has two components perpendicular to each other.

There are many natural and synthetic materials, like cloud, glass and plastic, that reflect or transmit one component while blocking the other completely or partially. This phenomenon can be used very creatively for cinematographic purposes: light that is already polarized can be blocked by putting another polarizing material on the camera lens, making only a portion of the cinematographic frame darker.

 Image without polarizer             Image with polarizer

There are many other interesting sides to light. As said earlier, a lifetime is not enough just to churn through the superficial layers of light’s behavior.

A cinematographer keeps studying light throughout his life, and in the course of it comes to know more about his art and himself.

Cinematographic Learning – Part 3

“… the Director of Photography is a money manager, who with the assistance of the crew, must deliver daily a product that is aesthetically exciting, technically exact, and on budget.

“And – oh, yes – he or she must, in each expensive minute of every working day, contribute to the art of the film.”

– David L. Quaid, ASC

Having a controlled check on composition and exposure is important, but all such theoretical knowledge falls flat if it cannot be applied through the camera. Whatever the make, a camera is basically just a light-tight box, with built-in controls for a sharp image (the lens) and for exposure (the aperture and shutter), and with film or a sensor at the back.

These are the basic parts a camera must have; but with more sophisticated and user-friendly cameras lining up in the market every day, a detailed knowledge of each popular make, and of the differences among them, is necessary.

When the motion picture became a major industry in the West, just after World War II, efficient, automatic cameras boomed in the market. For a long time before that, cameras were hand-cranked. That means the operator used to run the show by manually cranking 10-16 frames per second through the camera gate. Gradually, film advance inside the camera was taken over by the machine. New companies like Arri, Mitchell, Aaton, Éclair and Panavision appeared in the market.

Cameras soon became standardized, so that one who knows how to use a camera of a particular make can easily run another with minimal extra learning. As camera design is based on common sense, this common ground among completely different cameras made life easy for cinematographers.

But what was the basic camera design? How has it stayed the same – even a hundred years after the camera first appeared?

A look into the basic lay-out of any motion picture camera can clarify this.

No camera is better than the ARRI IIC for this purpose. The granddaddy of most cameras functional today, this lightweight, small reflex camera exhibits the basic design that made all ARRI cameras popular. Stanley Kubrick, a great Arri aficionado throughout his life, used this portable camera to some extent in all his films.

The ARRI IIC (IIC being the model number, while ARRI is the company that manufactures these cameras, named after its two founders, Arnold and Richter: Ar-Ri) is a modular camera. That means its components can be separated, and other fitting components can be locked in their places.

Basically, the camera system consists of the camera body, with the gate, pressure plate and pull-down claw where the film is threaded; the viewfinder; the camera motor, which runs the film through the camera; the lens; and the magazine, where both the unexposed and the exposed film stock stay.

A detailed look into each part is necessary.

Just like a still-photography SLR camera, the ARRI IIC body houses a mirror shutter, with a prism on the side to reflect light to the viewfinder's ground glass.

A brief explanation is needed now.

Any camera is a refinement of the basic pin-hole type, which creates an inverted, full-color image on the inside wall of a dark box (ie, the camera). Light rays that pass through a small pin-hole opposite the image wall create that inverted image.

A modern SLR camera is, in function, a pin-hole camera. However, it offers an external viewer some facilities to see the image on the inside wall.

It uses a mirror inclined at an angle (ideally 45°) to the image wall.

In the diagram above, it can be seen how the light rays come through the lens (the pin-hole in modern cameras stays within the lens, and is known as aperture) and fall on the mirror.

The mirror, in this case, is inclined at 45° to the film plane. When the mirror is in the path of the light rays, it obviously blocks the light from falling on the film, reflecting it upwards instead.

A mirror image of the inverted pin-hole image is formed on the translucent screen above. That screen is known as the ground glass.

Light passes through the ground glass, is reflected off the sides of the prism above, and finally reaches the eyepiece. An external viewer (ie, the photographer) can see the image by looking through the viewfinder.

That image is the same image that the pin-hole (or, in a modern camera, the lens) forms. It is the same image, without any modification, that gets imprinted on the film when the same light rays strike it.

This simple system of viewing assures the photographer of the framing. He knows he will get exactly what he sees.

As the photographer sees the reflection through a single lens, the camera system is known as SLR (Single Lens Reflex.)

Motion picture cameras use a similar technology. However, here the mirror does not flip upwards as it does in a still-photography SLR camera. No physical mirror could survive so many ups and downs at such high speed (the normal frame rate is 24 frames per second, so the mirror would have to flip up 24 times a second, each flip within 1/48th of a second).

Motion picture cameras solve this problem with a rotating mirror.

In a still-photography camera, the shutter stays behind the mirror, and both flip up or down together, in sync, when the film is exposed.

In a motion picture camera, however, the shutter and the mirror are parts of the same rotating disc.

Arri came up with this brilliant technology in 1931, anticipating the very high future demand for it.

How does this technology work?

As can be seen in the diagram above, a mirror inclined at a 45° angle, in a way similar to the still-photography camera's mirror, rotates at the gate where the film comes to be exposed. When the mirror is in front of the gate, the film runs, so that the next frame can come into the gate. When the mirror rotates out of the gate, downwards, the light rays hit the film frame directly and the image is recorded. As the mirror keeps rotating, it comes back in front of the gate, shutting the light off from the film but reflecting it to the viewfinder. Now the cameraman can see the image, but the film is blanked out. As the film is blanked out, it can move, and the next frame can come into the gate for the next exposure.

Twenty-four such exposures are made at the camera gate every second. The mirror itself works as the shutter.

In the Arri IIC, the shutter angle – how much of the circular disc is left open, as opposed to covered by the mirror – is variable from 15° through 165°. The shutter speed effectively changes as the shutter (or mirror) opening is changed. The shutter speed for each exposure (ie, each frame) is about 1/52nd of a second at the full shutter opening of 165°.
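The arithmetic behind that figure is simple: the exposure time per frame is the open fraction of the disc divided by the frame rate. A sketch in Python:

```python
# Exposure time per frame = (shutter angle / 360) / frames per second.

def exposure_time(shutter_angle_deg, fps):
    """Seconds of exposure each frame receives."""
    return (shutter_angle_deg / 360.0) / fps

t = exposure_time(165, 24)   # the Arri IIC's widest opening, at 24 fps
print(round(1 / t))          # 52 -> about 1/52nd of a second, as quoted above
print(round(1 / exposure_time(45, 24)))  # 192 -> a 45 degree opening gives 1/192nd
```

The same formula shows why narrowing the shutter sharpens motion: the opening shrinks, the exposure time shrinks with it, and each frame freezes a shorter slice of the action.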

A wider shutter opening makes moving subjects blurrier; a smaller opening makes them crisply sharp. For that reason, fast action normally calls for a very small opening.

In many modern cameras, the shutter opening (technically known as shutter angle) can be changed when the shot is on (when the camera is running.) An electronically operable shutter is installed in such cameras.

In the Arri IIC, and many other cameras, that is not possible. The shutter angle can be changed only manually, and only when the camera is powered off.

In all Arri and other reflex motion picture cameras, the continuous flicker in the viewfinder is a part of life for the cinematographer. It means he sees what the lens sees, but it also means he cannot see what is being recorded on the film: for that split second when a frame is exposed, the mirror is out of the way and the viewfinder is blanked.

Such a split second occurs twenty-four times every second during filming.

The aspect ratio, information about the frame and many other things are displayed through the viewfinder for the cinematographer's use. They are basically marks on the ground glass. The ground glass can be taken out, changed, or even fitted with LEDs.

In the camera body, besides the mirror, the other two important parts are the pressure plate and the pull-down claw.

The film cannot run through the gate on its own; something has to move it. The pull-down claw does that.

When the film is at rest, it gets exposed. Any motion or vibration in the film blurs the image, so something is needed to keep the film steadily static. In other words, the film needs a solid support. The pressure plate is a metal plate that gives the film that support. The film literally adheres to the pressure plate until the pull-down claw retracts, moves up, enters the next perforation and pulls it down, so that the next frame can be exposed.

The way pull-down claw moves can be compared to a train’s wheel on the track.

The lens is a major attachment to the camera body. There is a range of different lenses that can fit onto the lens mount of a camera. Some companies, like Panavision, are very exclusive in their choice of lenses: they use mounts where only their own lenses fit.

Arri makes its own lenses in collaboration with Carl Zeiss. The two major series of cine lenses are the Ultra Prime and the Master Prime. There are three types of popular mounts – the PL (Positive Lock) mount, the C-mount and the bayonet mount. Most camera operators prefer the PL mount for its ease of fitting.

The next important part of the camera is the motor, which fixes to the underside of the camera body. It is a variable-speed motor operated through a rheostat (a variable resistance). There are also controls on the camera body to nudge the shutter rotation (the inching knob), read the frame rate (the tachometer) and close the viewfinder from inside.

The motor can double as a handgrip for handheld operation; this is how many operative cameramen used this model for documentary-type shots. Like most motion picture cameras, the Arri IIC normally uses a 16.8 V DC battery. For faster frame rates (which mean slow motion in the projected footage), or for time-lapse cinematography, 24 V batteries can be used.

Arri-Zeiss Ultra Prime Series

Another indispensable part of a motion picture camera is its viewfinder. Basically, the viewfinder is a lens system that magnifies the ground-glass image, without distortion, for the cinematographer's eye.

There are two sections in the viewfinder tube. The main section connects to the ground-glass chamber through a door on the camera body. The eyepiece section can be removed from the main viewfinder. There is a diopter adjustment in the eyepiece, so that a cameraman who wears glasses can work without them.

A CCD (or any other type of) video sensor can be fitted inside the viewfinder system, so that a video image of the running shot can be viewed on a monitor at the time of the shoot. This appendage is called video assist.

However, the most handled part of the camera system is the magazine. The unexposed film stock stays here. Through a sprocket slit, the film runs out of the magazine, is threaded through the camera gate to be exposed, and is taken up back into the other spool of the magazine.

Between the take-off and take-up sprockets of the magazine runs the film to be shot. This running length is constant, and is maintained as a loop. For Arri magazines, this loop is 52 perforations (ie, 13 frames) long. If the loop is of a different size, the film can break under running stress.

Film magazines come in different capacities, like 200 ft, 400 ft and 1000 ft, normally.

There are different camera systems from different countries and manufacturers. Some DPs prefer non-Arri systems such as Panavision or Aaton (from the USA and France, respectively). There are excellent camera systems from Mitchell, Éclair and Bolex, and many more companies make reflex and other systems.

However, all motion picture cameras share the basic design that the Arri company in Germany implemented when it planned the IIC model.

A cinematographer has to update himself continuously on current camera designs and operating techniques. Reading product literature and shooting tests with each new camera on the market is part of a cinematographer's routine work.

Cinematographic Learning – Part 2

Composition is a vast subject to study in any visual art. It is very similar in function to the choice and use of words in literature, their contextual, semantic and semiotic relationships and the grammar of a language. That way, in visual media, everything can ultimately be reduced to Composition. However, for a better understanding of the potential of the medium, the study is structured in a different way.

While composition concerns what to show and why, there are steps to realizing the composition in mind – how to show it. In that sense, a big issue in cinematographic composition is exposure. Probably the most important factor in cinematography, exposure dictates whether the image can be seen at all.

The most important property of any image is contrast. Just imagine drawing a beautiful portrait with a black marker pen on a blackboard. Theoretically, the image is there. It should be. But, the image is useless if it is so embedded in the background canvas (in this case, the blackboard) that it cannot be seen.

To understand the concept of contrast in a better way, it is first required to understand the concept of the negative and the positive spaces.

Consider the following image, for example.


If one considers the solid black portion as the image, one sees a vase on a white background. However, the same person would see two white faces looking at each other on a black background, if he chooses to see it that way.

It is important to notice that no one can see both images at the same time. There is always a switch from one image to the other, with an interval in between.

The image one sees is known as the Positive Space, while the background and any other supporting space in the frame is the Negative Space.


Normally, no one looks at the background with interest; the viewer's attention is mostly limited to the foreground. When the attention switches from foreground to background, the negative and positive spaces swap positions.

There are many different ways of making an area in the frame positive (as opposed to the supporting area, or negative space.) Exposure is foremost among them.

Exposure means the level of brightness. To define it more correctly, Exposure is the amount of light used to create a particular gray tone in an image.

This is something known as Grayscale.

Notice how the pure white at the leftmost end gradually darkens to pure black at the rightmost.

Each variation of the white’s brightness is visually interpreted as a kind of gray. These variations are known as different tones.

Tones can be interpreted as visual manifestations of different exposures.


It is easy to understand that if the whole image inside a frame has the same tone throughout, it can never be seen, as it has no contrast.

For the image to be seen, and to look interesting, there should be different tones in different areas in the frame.

For example, the painting below.


It is the famous Guernica, a study of war, which Picasso painted in 1937 as a reaction to the bombardment of the Basque town of Guernica during the Spanish Civil War.

It is interesting to note that the mural was painted in grayscale, not in color. Whatever the reason for that choice may have been, it was necessary to play with different gray tones to produce a play of contrast that could capture the equivalent play of emotions as a city was destroyed, a culture marred, humanity strangled.

Had Picasso used fewer gray tones, the painting would have lost much of its appeal. In plainer words, it would have been flat.

Tonal Contrast is necessary to show an image.  More subtly, proper contrast is necessary to extract the right kind of emotion from an image.

White made less bright is gray; white at its least bright is black. In the photographic world, the brightness of some area in the image corresponds to the amount of light trapped to create that area. So tonal contrast is guided by a choice of variable brightness across the frame – different amounts of light trapped in different parts of the image.

The amount of light is known, in photographic terms, as exposure.

To have different exposures in different areas in the image, it is necessary to measure the exposures.

Exposure is measured just like any other physical quantity like weight, distance or temperature.

There are many units for measuring light in different ways. However, in photography, two major units are used around the world. When photographers measure the amount of light coming from a source (the sun, the moon, a household bulb, a torch or a candle flame), they use either of these two units.

The British and the Americans use a unit called the foot-candle, while most of the rest of the world uses a related unit known as the lux. 1 foot-candle is approximately 10.76 lux.
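The conversion between the two units is a single multiplication (a foot-candle is a lumen per square foot, a lux is a lumen per square metre, and a square metre is about 10.764 square feet). A sketch in Python:

```python
# 1 foot-candle = 1 lumen per square foot; 1 lux = 1 lumen per square metre.
# Since one square metre is about 10.764 square feet, 1 fc is about 10.764 lux.
FC_TO_LUX = 10.764

def fc_to_lux(fc):
    return fc * FC_TO_LUX

def lux_to_fc(lux):
    return lux / FC_TO_LUX

print(round(fc_to_lux(100), 1))  # 1076.4 lux for a meter reading of 100 fc
print(round(lux_to_fc(538)))     # roughly 50 fc for 538 lux
```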


The equipment used for measuring exposure is commonly known as a light meter. Companies such as Sekonic and Minolta make devices that measure light in different ways.

However, for most practical purposes, exposure is measured as a relation among the three photographic controls any camera has.

A camera is basically a light-tight box. That is how it got its unique name: camera comes from the Latin word for a room or chamber (camera obscura is Latin for "dark room"). The origin of the idea goes back to ancient civilizations in Greece, India, Egypt and China. In comparatively modern times, however, it started with the Renaissance painters, who used the camera obscura to paint models and landscapes with a 'photographic' realism.

Basically, such a camera obscura was a similar light-tight box – but so big a box that it could be called a room. It was totally light-tight except for a small pin-hole on one side, through which a beam of light could enter the room.

Everyone knows images are made of light. Human vision – the images forming on the retina of the eye – is completely guided by light. With no light, no image can be formed.

However, light can come from an object in two different ways – either the object emits light, or it can reflect light originally coming from some other radiating source.

Whatever may be the case, man sees the world as light from the world strikes his eyes.

If the light is focused, and not scattered too much in different directions, man can create a facsimile image of the world on a surface too.

That is precisely what the Renaissance painters were doing in the 15th and 16th centuries.


All modern cameras are built on the same concept. Basically, all of them are just light tight boxes, fitted with a few extra things to control exposure and sharpness in the image.


In modern cameras, a lens is fitted before the pin-hole (in fact, the pin-hole is incorporated in the lens) for selective focus and an overall sharper image.


There are three main mechanisms to control exposure in a modern camera. One is the size of the pin-hole, called the aperture in modern photographic parlance. Aperture comes from the Latin word for an opening.

The other important control is the shutter. While the size of the aperture determines how much light passes through at a single moment, the shutter speed determines for how long the light passes.

The place where the film sits in the camera is called the gate (as light passes through this gate to fall on the film and make the image).


The shutter shuts light off from entering the gate. Only when the shutter is out of the way can light fall on the film, and the image is recorded for that duration.

Most shutters for handy SLR and non-SLR cameras (and DSLRs too) look like Venetian blinds. In film cameras, the shutters are like opaque rotating discs.

Shutter before the gate

The duration of light and the amount of light at each moment together determine the total amount of light falling on the film.

So how bright a certain white wall looks depends on this total amount, controlled both by the size of the aperture and by how long the shutter stays open.

Shutter speed is expressed in fractions of a second, like 1/48, for the duration the shutter is open (and the film is exposed to light).
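The trade-off between the two controls is a simple product: total exposure is light level times time, so halving one while doubling the other records the same tone (the reciprocity principle). A sketch in Python; the illuminance figures are made up for illustration:

```python
# Total exposure = illuminance * time, measured in lux-seconds.
# Doubling the light while halving the time leaves the recorded tone unchanged.

def total_exposure(lux, shutter_seconds):
    return lux * shutter_seconds

a = total_exposure(1000, 1 / 48)  # a typical 180 degree shutter at 24 fps
b = total_exposure(2000, 1 / 96)  # twice the light for half the time
print(abs(a - b) < 1e-12)  # True - the two settings record the same tone
```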

However, as previously seen, everything in the frame cannot be of similar brightness. For the image to be visible, it needs tonal contrast across different areas of the frame.

For the image to look interesting, the tonal contrast has to be proper in its distribution.

For example, look at the image below.


This is an image of maximum contrast – pure black and pure white, with no tone in between. The image looks like Chinese ink line-art, or a solarized image, as it is known in Photoshop®.

However, for the image to look more natural, a number of gray tones are needed between pure black and pure white.


It is the cinematographer's creative choice, keeping the mood of the particular scene in mind, how many different tones of gray (in other words, how many different exposures) he wants to use.

Normal exposure is the amount of light that extracts the maximum detail from an object's surface to show in the image. Obviously, only a few areas in the image will be normally exposed – usually areas in the positive space.

Under- and over-exposed areas surrounding the subject (these areas are normally in the foreground and the background only, maintaining a clear separation from the subject space) create the negative space, supporting the subject.

In the famous photograph by Ansel Adams above, area b – the mountain tops and most of the river – is seen in normal exposure. However, the sky in area a lacks much visual detail, as it is too bright (over-exposed), while area c, to the left of the river, is so dark that detail is again lost (under-exposed).

The third major control mechanism for exposure in a camera is the film's sensitivity to light. Any amount of light can fall on the film (or the sensor, in the case of a digital camera), but the film can be highly receptive to that light, or less receptive.

In the case of a low-sensitivity film (called a slow film), more light needs to fall on its surface to produce normal exposure.

A film's sensitivity is measured in ISO units – like ISO 100, ISO 250 or ISO 320. The higher the value, the more sensitive the film is to light.
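In practice, each doubling of the ISO value is one "stop" of sensitivity: the film needs half as much light for the same tone. A sketch in Python:

```python
import math

# Each doubling of ISO is one photographic "stop" of sensitivity.

def stops_between(iso_a, iso_b):
    """How many stops faster iso_b is than iso_a."""
    return math.log2(iso_b / iso_a)

print(stops_between(100, 400))  # 2.0 - ISO 400 is two stops faster than ISO 100
print(stops_between(100, 800))  # 3.0 - ISO 800 needs 1/8th the light of ISO 100
```

Values like ISO 250 or ISO 320 fall on fractional stops, which is why the scale looks irregular at first glance.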

The subject of exposure is highly interesting, and at the same time truly important in determining the aesthetic quality of the image.

The cinematographer takes considerable time controlling the tones in his images through exposure checks. More than anything else, a proper command of exposure sets the class of a cinematographer as an artist.
