We break down commonly used terms for beginners
Confused by the ton of tech gibberish thrown at you every time you try to buy a new phone? Is the sales associate's pitch too much for you? You've come to the right place: we're going to break down tech jargon into palatable bits that normal people can understand.
Strictly speaking, a megapixel is a unit of image-sensing capacity in a digital camera: one million pixels. The old rule of thumb was that the higher a camera's megapixel count, the higher the resolution and the perceived quality of its pictures.
That's no longer true. The race to have the highest-megapixel sensor is over, and many companies have opted for sensors with fewer megapixels but better light-gathering ability via UltraPixel tech (and its derivatives).
Picture quality is a complicated mix of megapixels, sensor size, aperture and the image processing used to mash them all together. Just remember that a higher megapixel count doesn't always mean a better photo.
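If you like numbers, the megapixel math itself is simple: it's just the sensor's pixel width times its pixel height, divided by a million. The resolutions below are illustrative examples, not tied to any specific phone:

```python
# Megapixels = (width in pixels * height in pixels) / 1,000,000.
# Resolutions here are illustrative, not from any particular phone.
def megapixels(width, height):
    return width * height / 1_000_000

print(megapixels(4000, 3000))  # a typical 12 MP sensor
print(megapixels(8000, 6000))  # a 48 MP sensor: 4x the pixels,
                               # but not automatically 4x the quality
```

Note that quadrupling the pixel count only doubles the resolution along each edge, which is part of why big megapixel jumps look less dramatic in real photos than on spec sheets.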
An image sensor is the part of the camera that actually captures light and transforms it into the image you see. The bigger the image sensor, the more light it can collect and the better its low-light performance can be. But since space in a phone is limited, manufacturers often increase the pixel count without increasing the size of the sensor, compromising light-gathering ability in favor of resolution.
That's also why some manufacturers now use lower-resolution sensors than in previous years: similar to HTC's UltraPixel tech, they trade resolution for larger pixels that gather more light, improving low-light performance.
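The trade-off comes down to pixel size: cram more pixels onto the same sensor and each one gets smaller, catching less light. Here's a rough sketch of that relationship, using made-up but plausible numbers (real phone sensors vary):

```python
# Rough sketch: pixel pitch (the size of each pixel) shrinks as you
# pack more pixels onto the same sensor. Numbers are illustrative.
def pixel_pitch_um(sensor_width_mm, pixels_across):
    """Approximate pixel size in micrometers, assuming square pixels."""
    return sensor_width_mm * 1000 / pixels_across

# The same hypothetical 6 mm-wide sensor:
print(pixel_pitch_um(6.0, 4000))  # 12 MP-class: 1.5 um pixels
print(pixel_pitch_um(6.0, 8000))  # 48 MP-class: 0.75 um pixels -> less light each
```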
In photography terms, aperture measures how much light a lens lets in, written as an f-number like f/1.8. The lower the f-number, the wider the opening, the more light gets in and the better the camera does in low-light situations. A wider aperture also gives a shallower depth of field, which means you get a more pronounced blurred background (bokeh) when you take a shot.
High-end phones like Huawei's P20 Pro have apertures of f/1.8 and lower, while most mid-range smartphones sit between f/2.2 and f/2.0. Samsung has the bragging rights of being the only brand with a variable-aperture phone: the Galaxy S9 and S9 Plus can switch between f/1.5 and f/2.4 when needed.
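Those f-numbers matter more than they look: the light a lens passes scales roughly with one over the f-number squared, so small changes add up. A quick back-of-the-envelope comparison:

```python
# Light gathered scales roughly with 1/(f-number)^2,
# so a lower f-number means a brighter exposure.
def light_ratio(f_small, f_large):
    """How many times more light the smaller f-number lets in."""
    return (f_large / f_small) ** 2

print(light_ratio(1.5, 2.4))  # the Galaxy S9's two settings: ~2.56x more light
print(light_ratio(1.8, 2.2))  # f/1.8 vs a typical mid-range f/2.2: ~1.49x
```

In other words, the S9's f/1.5 mode gathers roughly two and a half times as much light as its f/2.4 mode, which is why it switches to the wider setting in the dark.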
Optical and Electronic Image Stabilization
Image stabilization does exactly what it sounds like – it stabilizes the camera when you're taking a shot so your photo comes out nice, crisp and blur-free. There are two types: optical image stabilization (OIS) uses mechanical means to keep the lens steady when your hands shake, while electronic or digital image stabilization (EIS) uses software to compensate for the movement. The GIF above (taken from Inam Ghafoor's demonstration of OIS at work) clearly shows how OIS compensates for movement for better photos.
Phase Detection Autofocus (PDAF)
Phase Detection Autofocus, or PDAF, is a focusing technology that locks focus much faster than the contrast-detection systems used in other cameras. Focusing speed is paramount when shooting fast-moving subjects, with PDAF-equipped cameras delivering quick, sub-0.3-second focus when you need it. The video below from B&H Photo talks about PDAF on full-sized cameras, but the basic idea applies to smartphones.
PDAF is usually accompanied by another kind of focusing tech in more expensive phones: laser AF. Again, the tech does exactly what it sounds like – it shoots a laser at a subject and uses the reflection to calculate distance and focus. Laser AF is good at shooting subjects up close and personal; however, it struggles with far-away subjects. That's why phones with laser AF usually pair it with PDAF or a more traditional contrast-focusing system.
Shooting in RAW format
When a smartphone camera takes an image, it's usually compressed and processed into a JPG file to save space on the phone. Shooting in RAW format simply means you're saving the photo as the camera sensor saw it. That gives users the freedom to correct things like exposure more easily, since there's more information to work with.
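"More information" has a concrete meaning here: JPEGs typically store 8 bits per color channel, while RAW files often keep 10 or 12 bits straight from the sensor (the exact depth varies by phone). Each extra bit doubles the number of brightness levels you can recover when editing:

```python
# Each extra bit per channel doubles the tonal levels available.
# Bit depths here are typical figures, not specific to any one phone.
def tonal_levels(bits):
    return 2 ** bits

print(tonal_levels(8))   # JPEG: 256 levels per channel
print(tonal_levels(10))  # 10-bit RAW: 1024 levels -> more room to fix exposure
print(tonal_levels(12))  # 12-bit RAW: 4096 levels
```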
AI tech in cameras is fairly new, and right now there isn't a straight answer or definition of what an AI-assisted camera actually does. The most common application, though, is scene detection, where the camera tries to guess what you're shooting (portrait, food, landscape, etc.) and adjusts contrast, saturation and color accordingly to make the photo pop.
Artificial bokeh mode, AKA portrait shooting
It's the ability to blur the background against the foreground, mimicking the "bokeh" of shots taken with wide-aperture lenses. Many phones with dual-camera systems have this feature in one way or another, but some are better at it than others.
Dynamic range is a camera's ability to capture both highlights and shadows in a photo, especially in high-contrast scenes – the span between the whitest whites and the blackest blacks. Photos like the one above, taken with Samsung's Galaxy S9 Plus, show detail in both the brightly lit sky and the street below it without either being lost.
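Photographers usually quote dynamic range in "stops", where each stop doubles the brightness a camera can distinguish. A tiny sketch of that conversion (the scene ratio below is illustrative):

```python
import math

# Dynamic range in stops: each stop doubles the brightness range.
# The brightness ratio used here is an illustrative example.
def stops(brightest_to_darkest_ratio):
    return math.log2(brightest_to_darkest_ratio)

print(stops(1024))  # a scene 1024x brighter at its peak spans 10 stops
print(stops(256))   # a 256:1 scene spans 8 stops
```

So a camera with more stops of dynamic range can hold detail in a bright sky and a shaded street in the same frame, instead of blowing out one to save the other.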