Photography is much more than pointing a camera and pressing a button. This is something you realize very soon after getting your hands on a serious shooter. Or maybe you stumbled upon your smartphone’s manual mode and asked yourself what all these settings are. There is a lot to learn in this endless pit of terms, settings, skills, and techniques.
Becoming proficient in photography will take more than reading this post, but you can use it as a general guide to get started. Here you will find the most important terms and concepts surrounding the art of photography. You can also bookmark this page and come back to it to refresh your knowledge.
The exposure triangle
This is the first thing you need to learn if you want to dive into the world of serious photography. The exposure triangle consists of three settings you need to keep in check in order to properly expose an image: aperture, shutter speed, and ISO. Let’s touch on each of them.
Aperture refers to the size of the opening through which light enters the camera. It is measured in f-stops, the ratio of the lens’s focal length to the diameter of the opening. The smaller the f-stop, the wider the opening: an f/1.8 aperture is wider than f/2.8, for example.
Aperture has one main effect in photographs, which is depth of field. Using a wider aperture like f/1.8 will create a smaller depth of field. This will enhance bokeh, which is the popular blurry background effect in photos. Tightening the aperture will keep more in focus.
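Since the f-stop is just focal length divided by opening diameter, you can work the math backwards. A minimal Python sketch (the function name is ours, purely for illustration):

```python
# f-stop = focal_length / aperture_diameter, so the physical opening
# diameter is focal_length / f-stop.
def aperture_diameter_mm(focal_length_mm: float, f_stop: float) -> float:
    """Approximate entrance-pupil diameter in millimeters."""
    return focal_length_mm / f_stop

# A 50mm lens at f/1.8 has a wider opening than at f/2.8.
print(round(aperture_diameter_mm(50, 1.8), 1))  # 27.8
print(round(aperture_diameter_mm(50, 2.8), 1))  # 17.9
```

This is why the smaller f-number means the wider opening: the focal length is being divided by a smaller number.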
In order to take a photograph a camera needs to let light into the sensor. The camera has a shutter, which stops light from reaching the sensor until activated. When a shot is triggered, the shutter will open up and expose the sensor to entering light. The time the shutter stays open is referred to as shutter speed.
Shutter speed is typically measured in seconds and fractions of a second. A shutter speed of 1/100 will expose the sensor for a hundredth of a second. Likewise a 1/2 shutter speed will last half a second. You can also leave the shutter open for multiple seconds, which is commonly referred to as a long exposure shot.
A faster shutter speed better freezes the scene. Elongating shutter speed will brighten an image, but it can also create motion blur (which is not always bad).
ISO relates to sensor (or film) sensitivity to light. A lower ISO makes the sensor less sensitive to light, meaning it either needs more illumination or a longer shutter speed to properly expose an image. Increasing the ISO makes your sensor more sensitive to light, allowing you to shoot in darker environments, with tighter apertures, and/or using faster shutter speeds.
ISO is measured in numbers. While manufacturers used to stick to ISO 100, 200, 400, 800, 1600, and so on (doubling in value), things have changed with more recent cameras. Smaller increments have been introduced for better refinement, but the concept is the same. ISO 100 is half as sensitive as ISO 200, which is also half as sensitive as ISO 400.
The effects of ISO are simple to understand. A higher ISO will make a sensor more sensitive, and therefore, make an image brighter. At the same time, increasing the ISO creates more grain or noise.
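The three triangle settings can be combined into a single exposure value (EV) using the standard formula EV = log2(N²/t), where N is the f-stop and t the shutter time in seconds; raising the ISO offsets the EV the scene requires. A small sketch of the idea (the function is our own illustration, not a camera API):

```python
import math

# EV = log2(f_stop^2 / shutter_time), referenced to ISO 100.
# Doubling the ISO means the scene needs one stop less light.
def exposure_value(f_stop: float, shutter_s: float, iso: int = 100) -> float:
    """Exposure value relative to ISO 100; higher EV = less light needed."""
    return math.log2(f_stop**2 / shutter_s) - math.log2(iso / 100)

# The classic "sunny 16" rule (f/16, 1/100s, ISO 100) lands near EV 15.
print(round(exposure_value(16, 1/100)))           # 15
# Doubling ISO to 200 shifts the same settings by exactly one stop.
print(round(exposure_value(16, 1/100, iso=200)))  # 14
```

Notice that every one-stop change, whichever setting produces it, moves this number by exactly 1: that is what makes the triangle a triangle.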
If you have ever seen a camera button with “+” and “-” signs in it, that would be the exposure compensation control, otherwise known as exposure value (EV). This will help when shooting in any of the auto or semi-auto modes (aperture priority, shutter priority, etc.).
Cameras try to get the right exposure by measuring light, but they don’t always get what you intended to capture. You may not even want a well-exposed image. Sometimes you want things to look a little darker to add mood, for example. With exposure compensation you can tell the camera it’s capturing exposure incorrectly, and it will make up for it by adjusting other settings (usually ISO).
Exposure compensation is usually measured in stops, like so: -1.0, -0.7, -0.3, 0.0, +0.3, +0.7, +1.0. Here, -1.0 means one stop darker, while +1.0 means one stop brighter.
The Oxford Dictionary defines dynamic range as “the ratio of the largest to the smallest intensity of sound that can be reliably transmitted or reproduced by a particular sound system.” That definition refers to audio, but the idea is similar in photography. Dynamic range relates to how much data a camera can capture at the extremes of exposure, from the darkest to the lightest parts of a scene.
Dynamic range is measured in stops, where each stop equals double or half the amount of light. Increasing exposure by one stop means doubling the light. If you were shooting at shutter speed 1/100, one stop brighter would be 1/50, while one stop darker would be 1/200.
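Stop arithmetic is simple doubling and halving, which makes it easy to sketch (the helper below is our own, for illustration):

```python
# Each stop doubles (positive) or halves (negative) the light gathered.
def shutter_after_stops(shutter_s: float, stops: int) -> float:
    """Shutter time after brightening (+) or darkening (-) by whole stops."""
    return shutter_s * (2 ** stops)

base = 1/100
print(shutter_after_stops(base, +1))  # 0.02  -> 1/50, one stop brighter
print(shutter_after_stops(base, -1))  # 0.005 -> 1/200, one stop darker
```

The same doubling logic applies to ISO, and (via the square in the f-stop formula) to aperture values spaced roughly a factor of 1.4 apart.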
Put simply, focal length is the distance between a camera sensor (or film) and a lens’s point of convergence.
The hardest part is understanding what the point of convergence (also known as the optical center) is. When light rays enter a lens they travel through the glass and bend to converge at a single point. This point is where light is gathered to form a sharp image on the sensor. To keep a standard, manufacturers measure focal length with the lens focused at infinity.
Focal length is measured in millimeters. A 50mm lens will have a point of convergence that is 50mm (or 5cm) from the sensor. Focal length also determines how “zoomed in” you are, changes perspective, and affects depth of field.
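The “zoomed in” effect can be quantified with the standard angle-of-view formula for a rectilinear lens, AOV = 2·atan(sensor width / 2·focal length). A quick sketch (sensor width of 36mm assumes a full-frame camera):

```python
import math

# Horizontal angle of view for a rectilinear lens focused at infinity.
def angle_of_view_deg(sensor_width_mm: float, focal_length_mm: float) -> float:
    """Wider angle = more of the scene; narrower = more 'zoomed in'."""
    return math.degrees(2 * math.atan(sensor_width_mm / (2 * focal_length_mm)))

# Full-frame sensors are 36mm wide; a longer lens sees a narrower slice.
print(round(angle_of_view_deg(36, 50), 1))   # 39.6 degrees
print(round(angle_of_view_deg(36, 200), 1))  # 10.3 degrees
```

This also shows why the same focal length looks more “zoomed in” on a smaller sensor: shrink the sensor width and the angle narrows.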
Zoom types: Optical, digital, and hybrid
In photography, camera zoom refers to making a subject appear closer or farther away in an image. Zooming in gives you a closer look at objects, while zooming out will let you capture a wider space. Cameras use three types of zoom technology: optical, digital, and hybrid.
Optical zoom is achieved by using a series of lens elements. Glass can move through the lens to zoom in or out. Digital zoom achieves a similar effect without mechanical work or glass elements. It essentially cuts off areas around your scene to make it seem like you are closer to the subject. Digital zoom is technically cropping. Hybrid zoom combines the two: it takes advantage of optical zoom, digital zoom, and software to get improved results when zooming in further than the lens’s physical capabilities.
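Because digital zoom is cropping, its math is easy to show. A minimal sketch (our own helper, not any camera API; a real camera would also upscale the crop back to full resolution):

```python
# Digital zoom keeps the central 1/zoom portion of the frame.
def digital_zoom_crop(width: int, height: int, zoom: float):
    """Return the (x, y, w, h) center-crop box for a given zoom factor."""
    w, h = int(width / zoom), int(height / zoom)
    x, y = (width - w) // 2, (height - h) // 2
    return x, y, w, h

# 2x digital zoom on a 4000x3000 image keeps the middle 2000x1500 pixels.
print(digital_zoom_crop(4000, 3000, 2))  # (1000, 750, 2000, 1500)
```

The quality cost is visible in the numbers: at 2x you are left with a quarter of the original pixels.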
White balance refers to the effect color temperature and tint have on photographs. Different light sources emit varying color temperatures, ranging in a spectrum between orange and blue. Likewise, light comes with tint, which ranges between green and magenta. Changing the white balance settings will help you find a balance between these colors and achieve a more natural effect.
Color temperature is measured in kelvins (K). In photography we have certain white balance presets to help figure out the correct color temperature to use under different circumstances.
- Candlelight: 1,000-2,000K
- Tungsten bulb: 2,500-3,500K
- Sunrise/sunset: 3,000-4,000K
- Fluorescent light: 4,000-5,000K
- Flash/direct sunlight: 5,000-6,500K
- Cloudy sky: 6,500-8,000K
- Heavy clouds: 9,000-10,000K
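The list above maps naturally to a lookup table. A hypothetical helper, purely for illustration, that picks the nearest preset for a measured color temperature:

```python
# Presets and approximate kelvin ranges from the list above.
WB_PRESETS = [
    ("Candlelight", 1000, 2000),
    ("Tungsten bulb", 2500, 3500),
    ("Sunrise/sunset", 3000, 4000),
    ("Fluorescent light", 4000, 5000),
    ("Flash/direct sunlight", 5000, 6500),
    ("Cloudy sky", 6500, 8000),
    ("Heavy clouds", 9000, 10000),
]

def nearest_preset(kelvin: int) -> str:
    """Pick the preset whose range midpoint is closest to the measurement."""
    return min(WB_PRESETS, key=lambda p: abs((p[1] + p[2]) / 2 - kelvin))[0]

print(nearest_preset(3200))  # Tungsten bulb
print(nearest_preset(7000))  # Cloudy sky
```

Real cameras estimate the scene temperature automatically, but manually choosing a preset amounts to exactly this kind of table lookup.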
A megapixel simply means a million pixels. The term is used to measure the resolution of an image sensor. If a camera has a 12MP sensor, the images it takes are formed by twelve million pixels, equal to a 4,000×3,000 resolution.
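The arithmetic is just width times height, divided by a million:

```python
def megapixels(width: int, height: int) -> float:
    """Total pixel count in millions."""
    return width * height / 1_000_000

print(megapixels(4000, 3000))  # 12.0 -> a 12MP sensor
print(megapixels(1920, 1080))  # 2.0736 -> a Full HD frame is only ~2MP
```

Note how modest video resolutions are by this measure: even 4K video (3840×2160) works out to under 9MP per frame.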
RAW vs JPEG
A RAW image is an uncompressed, unprocessed image file. It keeps all the data captured by the sensor, making it a much larger file, but with no quality loss and more editing power. Because it bypasses in-camera processing, RAW data by itself isn’t much to look at.
RAW should only be used if you’re planning on going back to edit your pictures. The file sizes are much larger, but this does allow you to tweak the full exposure and color settings of your pictures, bypassing the camera’s default image processing.
Saving a picture as a JPEG throws away image data and compresses the picture, but this is perfectly fine if you’re planning to upload it to Facebook or take a quick snap for your gallery.
Optical image stabilization (OIS) compensates for small movements of the camera during exposure. In general terms, it uses a floating lens element, gyroscopes, and small motors. These are controlled by a microcontroller that moves the lens very slightly to counteract the shaking of the camera — if the camera moves to the right, the lens moves left.
OIS is generally the best option because all stabilization is done mechanically rather than through software, so no quality is lost in the process.
Electronic image stabilization (EIS) works through software. Essentially, EIS breaks the video into chunks and compares each frame to the previous ones. It then determines whether movement in the frame was natural or unwanted shake, and corrects it.
EIS usually degrades quality, as it needs a margin around the content’s edges to apply corrections, though it has improved in the last few years. Smartphone EIS usually takes advantage of the gyroscope and accelerometer, making it more precise and reducing quality loss.
Smartphone cameras generally use three types of autofocus systems: contrast-detect, phase-detect, and dual-pixel. We will tell you about them in that order, from worst to best.
Contrast-detect is the oldest of the three, and works by measuring contrast between areas. The idea is that a focused area will have higher contrast, as its edges will be sharper. When an area reaches a certain contrast, the camera considers it in focus.
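The contrast measurement at the heart of this can be sketched with a simple sharpness score: sum the squared differences between neighboring pixels, since sharp edges produce large local differences. This toy metric is our own illustration, not any camera's actual algorithm:

```python
# Score sharpness of a 2D grayscale image (list of lists of 0-255 values).
# Contrast-detect AF moves the lens to the position maximizing such a score.
def sharpness(image) -> int:
    score = 0
    for y in range(len(image) - 1):
        for x in range(len(image[0]) - 1):
            score += (image[y][x] - image[y][x + 1]) ** 2  # horizontal edge
            score += (image[y][x] - image[y + 1][x]) ** 2  # vertical edge
    return score

sharp = [[0, 255], [255, 0]]       # hard edges
blurry = [[100, 140], [140, 100]]  # soft transitions
print(sharpness(sharp) > sharpness(blurry))  # True
```

The drawback is also visible here: the camera must hunt back and forth to find the score's peak, which is why contrast-detect is comparatively slow.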
“Phase” means that light rays originating from a specific point hit opposing sides of a lens with equal intensity – in other words they are “in phase.” Phase-detect autofocus uses photodiodes across the sensor to measure differences in phase. It then moves the focusing element in the lens to bring the image into focus.
This is easily among the best autofocus technologies available. Dual-pixel autofocus is like phase-detect, but it uses a greater number of focus points across the sensor. Instead of relying on dedicated focus pixels, each pixel comprises two photodiodes that can compare subtle phase differences to calculate where to move the lens.
HDR (high dynamic range) accomplishes a balanced exposure throughout the frame. This is done by shooting multiple images at different shutter speeds, so that each photo exposes for different light levels. These images are then merged into a single photo with much more information in both the bright and dark sections.
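A toy version of the merge step: weight each pixel toward mid-gray values so blown highlights and crushed shadows count less, then take the weighted average across exposures. Real HDR pipelines are far more sophisticated; this sketch only shows the idea:

```python
# Merge aligned exposures (equal-length lists of 0-255 grayscale pixels).
def hdr_merge(exposures):
    merged = []
    for pixels in zip(*exposures):
        # Weight peaks at mid-gray (128) and falls off toward 0 and 255,
        # so well-exposed pixels dominate the average.
        weights = [1 + 127 - abs(p - 128) for p in pixels]
        merged.append(sum(p * w for p, w in zip(pixels, weights)) / sum(weights))
    return merged

dark, mid, bright = [10, 0, 30], [120, 60, 200], [250, 180, 255]
print(hdr_merge([dark, mid, bright]))
```

For each pixel, the result leans toward whichever exposure captured that spot best, which is exactly what lets the final photo hold detail in shadows and highlights at once.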
Pixel-binning is a process that sees data from four pixels combined into one. So a camera sensor with tiny 0.9 micron pixels will produce results equivalent to 1.8 micron pixels when taking a pixel-binned shot. This technique is mostly used in smartphones, which are forced to use smaller sensors due to size restrictions.
The biggest downside of this technique is that your resolution is effectively divided by four when taking a pixel-binned shot. So that means a binned shot on a 48MP camera is actually 12MP, while a binned shot on a 16MP camera is only four megapixels.
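Combining a 2×2 block of pixels into one is straightforward to sketch (averaging here; actual sensors combine the charge or signal before readout):

```python
# 2x2 pixel binning: average each 2x2 block, quartering the resolution.
def bin_2x2(image):
    """image: 2D list with even dimensions; returns the half-size result."""
    binned = []
    for y in range(0, len(image), 2):
        row = []
        for x in range(0, len(image[0]), 2):
            block = (image[y][x] + image[y][x + 1]
                     + image[y + 1][x] + image[y + 1][x + 1])
            row.append(block / 4)
        binned.append(row)
    return binned

quad = [[10, 20, 30, 40],
        [10, 20, 30, 40]]
print(bin_2x2(quad))  # [[15.0, 35.0]] -> 4x2 becomes 2x1
```

Averaging four noisy readings also suppresses random noise, which is the real payoff of binning in low light.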
Portrait mode in smartphone photography
Portrait mode is a term used to describe the artificial bokeh (BOH-kay) effect produced by smartphones. Bokeh is a photography effect where the subject of a picture is kept in focus while the background falls out of focus. By using portrait mode to create a bokeh effect, you can take dynamic photographs which look more professional.
Night Mode (Dark Night, Nightscape, or whatever your manufacturer may call it) uses artificial intelligence to analyze the scene you are trying to photograph. The phone will take into account multiple factors, such as light, the phone’s movement, and the movement of objects being captured. The device will then shoot a series of images at different exposure levels, use bracketing to put them together, and bring out as much detail as it can into a single picture.
Of course, there is a lot more going on behind the scenes. The phone must also measure white balance, colors, and other elements, which is usually done with fancy algorithms most of us don’t fully understand.
Super resolution is the practice of generating a higher resolution image by taking and processing multiple lower resolution shots. There are minor differences between the frames, and by comparing matching points across each image, algorithms or machine learning techniques can use those differences to fill in the gaps and create additional detail.
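A heavily simplified sketch of the merging step: average several frames of the same scene per pixel. Real super-resolution pipelines also align frames at sub-pixel offsets and exploit those offsets to reconstruct detail; this only shows how multiple noisy readings combine into one cleaner one:

```python
# Average aligned frames (equal-length pixel lists) per pixel.
def merge_frames(frames):
    """Per-pixel mean across frames; random noise cancels out."""
    return [sum(px) / len(frames) for px in zip(*frames)]

# Three slightly noisy captures of the same three pixels.
noisy = [[98, 201, 52], [102, 199, 49], [100, 200, 51]]
print(merge_frames(noisy))  # noise averages back toward the true values
```

Each frame's noise is random, so averaging pulls every pixel back toward its true value, giving the algorithm a cleaner foundation to upscale from.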
Size matters in photography. Because smartphone sensors and lenses aren’t getting much bigger, smartphone manufacturers need to figure out ways to get more out of less. Enter the age of Computational Photography.
In simple terms, this refers to image improvements with the help of software and complex algorithms. Some examples of Computational Photography are AI enhancements, night mode, pixel binning, portrait mode, HDR, and others.
Bonus: Check out more photography posts!
We have more photography content for you! Take a look at some of our featured posts and tutorials to keep improving your skills.
That’s it for our look at photography terms, had enough? We haven’t! Learning never stops with photography, and neither does technology. Make sure to bookmark this page to see any future updates and additions. It’s also smart to come back and refresh your memory from time to time.