When Google introduced Night Sight on the Pixel 3, it was a revelation.
It was as if someone had turned the lights on in your low-light photos. Shots that were previously impossible became possible: no tripod or flash needed.
Five years later, taking photos in the dark is routine: almost every phone across the price spectrum comes with some form of night mode. Video, however, is a different story. Night modes for stills capture multiple frames to create one brighter image, and it's simply not possible to copy and paste the mechanics of that feature over to video, which, by its nature, is already a series of images. The answer, as it so often is lately, is to turn to AI.
When the Pixel 8 Pro launched this fall, Google announced a feature called Video Boost with Night Sight, which would arrive in a future software update. It uses AI to process your videos, bringing out more detail and enhancing color, which is especially useful for low-light clips. There's just one catch: the processing happens in the cloud on Google's servers, not on your phone.
As promised, Video Boost started arriving on devices a couple of weeks ago with the December Pixel update, including my Pixel 8 Pro review unit. And it's good! But it's not the defining moment that the original Night Sight was. That speaks both to how impressive Night Sight was when it debuted and to the particular challenges that video presents to a smartphone's camera system.
Video Boost works like this: first, and crucially, you need a Pixel 8 Pro, not a regular Pixel 8 (Google didn't answer my question about why that's the case). You turn it on in your camera settings when you want to use it, and then you record your video. Once you're done, you need to back up the video to your Google Photos account, either automatically or manually. Then you wait. And wait. And in some cases, you keep waiting: Video Boost works on videos up to ten minutes long, but even a clip just a couple of minutes long can take hours to process.
Depending on the kind of video you're shooting, that wait may or may not be worth it. Google's support documentation says it's designed to let you “make videos on your Pixel phone at higher quality and with better lighting, colors and details” in any lighting. But the main thing Video Boost is for is improving low-light video; that's according to Isaac Reynolds, group product manager for the Pixel camera. “Think of it as Night Sight Video, because all the adjustments to the other algorithms are aimed at Night Sight.”
All the processing that makes our well-lit videos look better (stabilization, tone mapping) stops working when you try to record video in very low light. Reynolds explains that even the kind of blur you get in low-light video is different. “OIS [optical image stabilization] can stabilize a frame, but only of a certain length.” Low-light video requires longer frames, and that's a big challenge for stabilization. “When you start walking around in low light, with frames that long, you can get a particular kind of intra-frame blur that's just the residue that OIS can't compensate for.” In other words, it's very complicated.
All of this helps explain what I see in my own Video Boost clips. In good lighting, I don't see much difference. Some colors pop a little more, but I don't see anything that would compel me to use it regularly when there's plenty of available light. In extremely low light, Video Boost can recover some color and detail that's completely lost in a standard video clip. But it's not as dramatic as the difference between a regular photo and a Night Sight photo under the same conditions.
There's a sweet spot between those extremes, though, where I can see Video Boost being really useful. In a clip where I walk along a path at dusk toward a dark pergola that houses the Kobe Bell, there's a noticeable improvement in shadow detail and stabilization after the Boost. The more I used Video Boost in regular, medium-low indoor lighting, the more I saw its benefits. You start to see how washed out standard videos look in these conditions, like one of my son playing with trucks on the dining room floor. Turning on Video Boost restored some of the vibrancy I'd forgotten was missing.
Video Boost is limited to the Pixel 8 Pro's main rear camera and records in 4K (the default) or 1080p at 30fps. Using Video Boost results in two clips: an initial “preview” file that hasn't been enhanced and is immediately available to share, and, eventually, the second “enhanced” file. Under the hood, though, there's a lot more going on.
Reynolds explained to me that Video Boost uses a different processing pipeline that retains much more of the captured image data that's typically discarded when you record a standard video file, sort of like the relationship between RAW and JPEG files. A temporary file holds this information on your device until it's been sent to the cloud; after that, it's deleted. That's a good thing, because the temporary files can be huge: several gigabytes for longer clips. The final enhanced videos, though, are a much more reasonable size: 513MB for a three-minute clip I recorded versus the temporary file's 6GB.
My first reaction to Video Boost was that it seemed like a stopgap: a feature demo of something that requires the cloud right now but will eventually move onto the device. Qualcomm showed off an on-device version of something similar this fall, so that must be the endgame, right? Reynolds says that's not how he thinks about it. “The things you can do in the cloud will always be more impressive than the things you can do on a phone.”
Case in point: he says that right now, Pixel phones run several smaller, optimized versions of Google's HDR Plus model on-device. But the full “core” HDR Plus model that Google has been developing over the past decade for its Pixel phones is too big to realistically run on any phone. On-device AI capabilities will improve over time, so it's likely that some things that can only be done in the cloud will move onto our devices. But equally, what's possible in the cloud will change, too. Reynolds says he thinks of the cloud as just “another component” of Tensor's capabilities.
In that sense, Video Boost is a glimpse of the future; it's just a future where the AI on your phone works hand in hand with the AI in the cloud. More features will be handled by a combination of on-device and off-device AI, and the distinction between what your phone can do and what a cloud server can do will take a backseat. It's not the “aha” moment that Night Sight was, but it will still be a significant shift in how we think about our phone's capabilities.