Google Clips is a small plastic device, a square roughly five centimeters on a side, that can be clipped to a shirt or set on a table. Externally, the camera looks like the Instagram icon popped into the real world: white on the front, turquoise on the back.
To turn the device on, you simply rotate the lens. After that, by Google's plan, you can forget about the camera. Clips tracks everything that happens in its 130-degree field of view and records seven-second clips of whatever it finds interesting. Over time, the device remembers faces and tries to take more shots of "familiar" people and fewer of random passers-by. The same logic extends to animals: a stranger's cat that wanders into the frame will not make it into a clip unless it does something special, but your own pet only has to turn its head nicely or raise a paw.
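Google has not published how Clips makes this decision internally; purely as an illustration of the behavior described above, a familiarity-weighted capture rule might be sketched like this (the function, thresholds, and bonus value are all hypothetical, not Google's actual logic):

```python
# Hypothetical sketch: decide whether to save a 7-second clip based on how
# interesting the scene is and whether a detected face is "familiar".
# FAMILIARITY_BONUS and CAPTURE_THRESHOLD are invented for illustration.
FAMILIARITY_BONUS = 0.3
CAPTURE_THRESHOLD = 0.7

def should_capture(scene_score: float, face_id: str, seen_counts: dict) -> bool:
    """scene_score: 0..1 'interestingness' from an assumed on-device model."""
    score = scene_score
    if seen_counts.get(face_id, 0) > 5:   # seen often enough to be "familiar"
        score += FAMILIARITY_BONUS        # favor familiar people and pets
    # remember this face for future decisions
    seen_counts[face_id] = seen_counts.get(face_id, 0) + 1
    return score >= CAPTURE_THRESHOLD
```

Under this sketch, a mildly interesting moment with a familiar face gets saved, while the same moment with a stranger is skipped, which matches the passers-by behavior described above.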
Unlike the Google Home smart speaker, which relies entirely on a cloud connection, Google Clips is a fully autonomous device. It watches what is going on around it, picks the moment, takes the shot, and sends it to a smartphone, all on its own. For a device only slightly larger than a matchbox, these capabilities are very impressive. Remi El-Ouazzane, head of the Intel team that built the low-power vision processing unit (VPU) for Clips, put it this way:
We were all surprised at how much intelligence Google managed to fit into such a small device. This smart camera shows a level of built-in AI that we could previously only dream of.
For the electronic brain inside Clips to learn to tell a good photo from a bad one, Google worked with professional editors and an entire army of image raters. "There is no off-the-shelf machine-learning model that can say: a child crawling on the floor probably looks good," explained Juston Payne. So Google collected a terabyte of its own video; raters scored the footage, it was labeled, and it became the training set for the artificial intelligence. Over time, the device began to grasp the psychology of people, to sense what they like and what interests them. The process is not finished: a Google Clips fresh from the factory is still learning. Give two identical cameras to you and a friend, and in a couple of weeks they will start shooting different things.
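The labeling step described above, where human scores become training targets, can be sketched minimally as follows. This is a generic supervised-learning setup, not Google's actual pipeline; the feature representation and the averaging of rater scores are assumptions for illustration:

```python
# Hypothetical sketch of the rater-to-training-set pipeline: each clip has
# some feature representation (e.g. embeddings from a vision model, assumed)
# and scores from several human raters, which are averaged into one target.
from dataclasses import dataclass

@dataclass
class RatedClip:
    features: list      # stand-in for per-clip features
    rater_scores: list  # scores assigned by human evaluators

def build_training_set(clips):
    """Turn rated clips into (X, y) for a regression model of 'clip quality'."""
    X, y = [], []
    for clip in clips:
        X.append(clip.features)
        y.append(sum(clip.rater_scores) / len(clip.rater_scores))
    return X, y
```

A model trained on such (X, y) pairs would then predict a quality score for unseen footage, which is the kind of judgment the article says Clips makes on-device.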
Machine learning Google-style has a drawback. So far, Clips reliably recognizes only humans and animals (in practice, only cats and dogs; hamsters and guinea pigs do not interest it). You cannot take the device on vacation and hope it will be delighted by a sunset or capture clips of swaying palm trees. Over time, Google plans to expand the machine-learning model so it supports more situations and understands more of the world.
As for technical specifications, the device has a 12-megapixel sensor and shoots bursts of frames at 15 fps. The 16 GB of internal storage seems very modest, but in practice it is enough to hold two full days of clips. There is no microphone; the device records only images. The battery lasts roughly three hours of active use, depending on whether there is anything interesting to shoot.