Apple is beginning to act on its generative AI (genAI) plans. Machine vision intelligence at scale generates huge quantities of data. Not all of that data should exist, so how does one ensure the information being used, and conceivably stored, is appropriate, of good quality, and legitimate?
Not only this, but how can a company use publicly sourced video data to train machine intelligence models without breaching privacy law? Apple has an idea for this. It’s allegedly preparing to acquire brighter AI, according to 9to5Mac.
So, who is brighter AI?
The German firm, recently called “Europe’s hottest AI start-up,” describes itself as “generative AI for privacy.”
Based on deep learning tech, the company’s solutions anonymize images and videos to help companies meet data protection regulations. This goes a little deeper than you might think. Apple already has its own tech for tasks like obscuring people and car registration plates in Maps images.
Brighter AI was founded in 2017 as a spin-off from German automotive supplier HELLA, which was itself acquired by Faurecia in 2021. (There’s additional food for thought in that Faurecia is part of the Forvia Group, which also offers a mobile app store.)
What the tech does
The German start-up’s tech goes further, identifying what is seen, anonymizing it, and then returning a natural-seeming image. In other words, it may take a person’s face and return what appears to be a natural face, but one that does not look like the original. Or, in the case of a vehicle, it might identify the registration plate and replace it with a fake one.
The idea here is that the images created within the process seem as normal as any other image, but don’t abuse privacy. This could enable companies to use recorded camera data for analytics and AI.
“With its proprietary Deep Natural Anonymization solution, brighter AI protects the identities of the people recorded," the company website says. "At the same time, companies from the automotive, healthcare, and the public sector can use anonymized data for analysis and machine learning without violating privacy. In this way, AI learning models and privacy go hand in hand."
In brief, the solution secures privacy for people in both new and existing video data. That’s going to be important from the point of view of privacy legislation, and for training machine learning models, because it widens the pool of usable data.
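To make that detect-and-replace flow concrete, here is a minimal sketch built on Apple’s own Vision and Core Image frameworks. It simply blurs detected faces; synthesizing a natural-looking replacement face is brighter AI’s proprietary Deep Natural Anonymization step, which this sketch does not attempt. The anonymizeFaces function name is mine, not Apple’s or brighter AI’s.

```swift
import Vision
import CoreImage
import CoreImage.CIFilterBuiltins

// Minimal sketch: find faces with Apple's Vision framework, then obscure
// each region. brighter AI's Deep Natural Anonymization would synthesize
// a realistic replacement face instead; a Gaussian blur stands in here.
func anonymizeFaces(in image: CIImage) throws -> CIImage {
    let request = VNDetectFaceRectanglesRequest()
    try VNImageRequestHandler(ciImage: image).perform([request])

    var output = image
    for face in request.results ?? [] {
        // Vision reports normalized coordinates; convert to pixel space.
        let box = VNImageRectForNormalizedRect(face.boundingBox,
                                               Int(image.extent.width),
                                               Int(image.extent.height))
        // Blur the face region and composite it back over the frame.
        let blur = CIFilter.gaussianBlur()
        blur.inputImage = output.cropped(to: box).clampedToExtent()
        blur.radius = 40
        if let blurred = blur.outputImage?.cropped(to: box) {
            output = blurred.composited(over: output)
        }
    }
    return output
}
```

The replacement step is what makes brighter AI interesting: a blur destroys the analytic value of a frame, while a convincing synthetic face keeps the image useful for analysis and model training, which is the company’s whole pitch.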
Why does the acquisition matter?
To begin, we know Apple has a mission with visionOS, and that within that mission the cameras on those devices will be picking up a lot of information. These devices effectively become data centers.
Apple doesn’t want to become responsible for all that data. Not only would doing so be an abuse of user privacy, but it would also ignore other people’s data privacy needs.
So, for Apple, it makes sense to ensure the systems it builds simply don’t gather the information in the first place. (That’s essentially the mantra that drives Apple’s entire approach to privacy and machine intelligence.)
Implications for machine imaging and data management
The other way this might be of use is data grading. When asked to do a task, even intelligent machines will analyze all the information they're given, so why not eliminate data that is irrelevant, personal, or otherwise unusable?
There are at least three kinds of information you see each time you open your eyes: irrelevant data, situational data, and essential data.
When it comes to making decisions or augmenting environments using information picked up by on-device cameras, it’s just sensible to ignore the first category entirely. That’s obviously easier to do with still images, but Vision Pro shows that transient moving image data is also part of the future of computing.
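As a hypothetical sketch of what such grading could look like in code, the tier names below mirror the three categories above; every type and function here is my own illustration, not anything Apple or brighter AI has shipped.

```swift
// Hypothetical triage of camera-derived observations, mirroring the three
// tiers above. Irrelevant data is dropped before it is ever stored.
enum DataGrade {
    case irrelevant   // e.g., bystanders' faces, passing registration plates
    case situational  // e.g., ambient context useful only for this session
    case essential    // e.g., the object or scene the task actually needs
}

struct Observation {
    let frame: Data
    let grade: DataGrade
}

// Keep situational and essential data; irrelevant data never persists.
func triage(_ observations: [Observation]) -> [Observation] {
    observations.filter { $0.grade != .irrelevant }
}
```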
The same tech Apple is building for Vision Pro might eventually be used to empower semi-autonomous vehicles. The cameras on those vehicles must see the world around them and make good real-time driving decisions in challenging, life-critical environments. Vehicle manufacturers know they must apply data anonymization techniques to stay on the right side of international privacy law as they gather images for their autonomous systems. Brighter AI even blogged about this earlier this year.
How it might be used
That’s further down the road (literally), but for now I suspect there will be numerous ways in which this tech is deployed:
- Apple will probably use it as a form of privacy screen.
- Application developers will use the same tech for their apps.
- There might also be implications in later data lifecycle management.
- There could be implications in copyright. (Though copyright holders will argue that even if original footage is anonymized, it was still based on their work.)
- This is likely just one component of Apple’s wider plans for AI across its products, particularly within its machine vision intelligence teams.
Apple CEO Tim Cook last week said of genAI, “We’ve got some things that we’re incredibly excited about that we’ll be talking about later this year.”
Analysts, meanwhile, are becoming excited. Wedbush analyst Daniel Ives predicted a “major growth cycle” for the company.
Morgan Stanley’s Apple analyst, Erik Woodring, put numbers around similar expectations, telling clients in a recent note: “Ultimately, we believe that new LLM-enabled software features can help accelerate iPhone replacement cycles, especially given the component requirements to run AI in the cloud & on-device, with every 0.25 year decrease in iPhone replacement cycles driving ~6% upside to our FY25 iPhone unit and revenue forecasts.”
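To unpack the arithmetic in that note: if annual iPhone sales roughly equal the installed base divided by the replacement cycle, and you assume a baseline cycle of about 4.5 years (my illustrative figure, not one Woodring cites), then a 0.25-year reduction to 4.25 years lifts annual units by a factor of 4.5 ÷ 4.25 ≈ 1.06, which is roughly the 6% upside the note describes.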
All in all, we’re certainly seeing Apple take the first steps in what will turn out to be 2024’s Apple AI arms race.
Please follow me on Mastodon, or join me in the AppleHolic’s bar & grill and Apple Discussions groups on MeWe.