Apple: Stop arguing, the new “iPhone” of the AI era is still the iPhone

In the AI arms race, Apple is actually not “late”.

Written by: Lian Ran

Editor: Jing Yu

**Source:** Geek Park

Everyone wants to know, in the seemingly coming AI era, who will become the new “iPhone”?

It is very likely that the “iPhone” in the AI era will still be the “iPhone”.

A recent paper shows that Apple researchers have tackled a key problem of deploying large models on memory-limited devices: by building an inference cost model aligned with flash memory behavior, they guide optimization in two key areas, reducing the amount of data transferred from flash memory and reading data in larger, more contiguous blocks.

As a result, the applicability and accessibility of large models are expanded, and Apple’s plan to integrate generative AI into iOS 18 may also be accelerated.

In the past year, since ChatGPT first opened to the public in November 2022 and the world plunged into the generative-AI craze, Apple, the world’s largest technology company, has rarely spoken publicly about the most important technological development of the past decade: generative artificial intelligence.

Apple Inc. | Source: medium

The outside world may think Apple is lagging in artificial intelligence, but a series of moves shows that Apple has been laying the groundwork all along; it simply has not made this public.

Since 2023, Apple has accelerated the development of its own artificial intelligence technology. It has not only set up a dedicated AI team to develop large language models, but also applies speech, image, and other recognition technologies extensively on the product side. Earlier, dozens of acquisitions laid a technical foundation; in particular, the accumulated technology behind the Siri voice assistant gives Apple an advantage in fields such as voice interaction. And with 2 billion active devices in hand, Apple is well placed to become the number one player in consumer AI applications.

The more visible changes began with the “Scary Fast” event at the end of October. At this event, Apple released the M3 Pro, said to be 40% faster than the M3, and the M3 Max, said to be 250% faster, and emphasized that the M3 Max, with its 16-core CPU and 40-core GPU, is suited to developing AI software. At the same time, Apple clearly positioned the new MacBook Pro as a tool for developers to create AI products.

The arrival of powerful AI-capable chips lays the groundwork for an Apple AI breakout. In fact, Apple’s accumulation in AI runs far deeper than that of any other giant.

01 Apple came prepared

One billion US dollars per year is the rumored investment figure for Apple’s AI plan.

According to Bloomberg, in July 2023, Apple built its own large-scale language model Ajax and launched an internal chatbot code-named “Apple GPT” to test Ajax functions. The key next step is to determine whether the technology meets competitive standards and how Apple will implement it into existing products.

Where did all this $1 billion go?

Spend a lot of money to build an AI team

John Giannandrea and Craig Federighi, Apple’s senior vice presidents of machine learning and AI strategy and of software engineering respectively, are leading these efforts. On Cook’s team they are known as the “executive sponsors” of the generative AI push. Eddy Cue, Apple’s senior vice president of services, is reportedly also involved. Together, the three can currently spend about $1 billion a year on the project.

John Giannandrea | Image source: Apple

Beyond the core team, recruitment for the AI effort has been underway since the end of April. At the time, more than a dozen ads on Apple’s jobs page sought machine learning experts in generative AI “who are passionate about building extraordinary autonomous systems.” The openings were spread across multiple teams in San Diego, the San Francisco Bay Area, and Seattle, including the system integration experience, input experience NLP, machine learning R&D, and technology development teams.

Some of the positions specifically focus on vision-generative artificial intelligence applications, with candidates working on “visual generative modeling to support applications such as computational photography, image and video editing, 3D shape and motion reconstruction, and avatar generation.”

Reports in September said Apple was actively recruiting from the AI teams at Google and Meta Platforms. Since AXLearn was uploaded to GitHub in July, 7 of its 18 contributors have previously worked at Google or Meta.

In fact, both Giannandrea and Ruoming Pang, an expert in neural networks, came from Google; Giannandrea spent eight years there developing advanced artificial intelligence systems. The two persuaded Apple to use Google Cloud, in particular Google Cloud’s custom Tensor Processing Unit (TPU) chips, for machine learning training. AXLearn, the machine learning framework developed for training Ajax GPT, is based in part on Pang’s research.

In October’s recruitment listings, Apple’s requirements for generative-AI talent became more explicit. For example, a job description for the App Store platform reads: “The company is developing a developer experience platform based on generative artificial intelligence, for internal use and to assist our app development teams.” Another post in the retail department mentioned that Apple is developing a “conversational AI platform (voice and chat)” to interact with customers. Tasks such as building text-generation technology for long-form generation, summarization, and question answering also appear in Apple’s job listings.

In other postings in Apple’s AI/ML organization, some positions emphasize the importance of foundation models and list “human-like conversational agents” as examples of applications that may be built on them. Apple has also posted openings in departments such as Siri Information Intelligence, which handles features of products like Siri and Spotlight search. In addition, Apple is actively looking for talent who can run model computation on local devices.

Accelerate the research and development of underlying technology

Beyond talent, technical preparations are also underway. Giannandrea is reportedly overseeing development of the technology underlying the new artificial intelligence system, and his team is revamping Siri to incorporate it. A smarter version of Siri could arrive as soon as next year.

On the software side, Federighi is leading the addition of AI to the next version of iOS, improving experiences including iMessage and Siri. Apple is said to have directed teams to build iOS features that run on large language models, using large amounts of data to improve AI capabilities. The new features would improve how Siri and Messages answer questions and auto-complete sentences.

Apple iOS 17 | Image source: Apple

Software engineering teams are also considering integrating generative AI into development tools like Xcode, a move that could help application developers write new applications faster. This would bring it in line with services like Microsoft’s GitHub Copilot, which provides autocomplete suggestions to developers as they write code.

Eddy Cue is pushing to add artificial intelligence to as many applications as possible, including Apple Music, Pages, and Keynote: for example, using AI to auto-generate playlists in Apple Music (a feature Spotify launched with OpenAI earlier this year), to help people write in apps like Pages, or to automatically create slideshows in Keynote (similar to what Microsoft already offers in Word and PowerPoint). Apple is also testing generative AI in internal customer-service applications for its AppleCare team.

Putting large models on the “iPhone”

However, Apple appears undecided on whether generative AI should run on-device, in the cloud, or somewhere in between. Running on-device is undoubtedly faster and better for protecting user privacy, but cloud deployment would let Apple’s large language model perform more complex and sophisticated operations. Both options have pros and cons, and Apple is trying to strike a balance between local and cloud computing.

There are reports that Apple will offer a combination of cloud-based AI and AI processed on-device. However, multiple former Apple machine learning engineers said that Apple’s leadership prefers to run software on devices rather than on cloud servers for the sake of improving privacy and performance.

For Senior Vice President Giannandrea, one of the basic principles of Apple’s AI development is respect for privacy. As he once said in an interview: “I understand that the larger the model in the data center, the more accurate it will be, but it is best to run the model close to the data rather than moving the data around.”

However, this may be very difficult to pull off. Some analysts note that Ajax GPT, for example, was trained with more than 200 billion parameters. Parameter count reflects the size and complexity of a machine learning model: the more parameters, the greater the complexity and the more storage and computing power required. An LLM with over 200 billion parameters cannot reasonably fit on an iPhone.
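To see why, a back-of-the-envelope calculation helps. The sketch below assumes half-precision weights (2 bytes per parameter) and roughly 8 GB of iPhone DRAM; both figures are illustrative assumptions, not Apple disclosures:

```python
# Rough memory footprint of a 200-billion-parameter model (illustrative).
params = 200_000_000_000
bytes_per_param = 2                        # assumed fp16/bf16 weights
model_gb = params * bytes_per_param / 1e9  # 400.0 GB
iphone_ram_gb = 8                          # assumed DRAM of a recent iPhone
print(f"model: {model_gb:.0f} GB, ratio: {model_gb / iphone_ram_gb:.0f}x")
```

Even aggressive 4-bit quantization would only bring this to about 100 GB, still an order of magnitude beyond the device’s memory, which is why streaming weights from flash becomes attractive.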

But the latest news suggests Apple may have settled on the device side. Apple recently released a research paper showing that it has found a way to run large models on the iPhone: building an inference cost model that harmonizes with flash memory behavior, guiding optimization in two key areas, reducing the volume of data transferred from flash memory and reading data in larger, more contiguous chunks.

The paper states that the new technique lets large models run up to 25 times faster on devices with limited memory, meaning complex AI models that previously could not run on small devices due to resource constraints will soon be able to run on consumer mobile devices such as iPhones and iPads.
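The two levers the paper names, transferring less data and reading it in larger contiguous chunks, can be illustrated with a toy sketch. This is not Apple’s implementation; it merely assumes a hypothetical setting in which only a sparse subset of weight rows is needed per inference step, and shows how coalescing the needed row indices into contiguous runs turns many small flash reads into a few large sequential ones:

```python
def coalesce_rows(row_ids):
    """Group requested row indices into maximal contiguous runs so that
    each run can be fetched with a single large flash read."""
    ids = sorted(set(row_ids))
    runs, start, prev = [], ids[0], ids[0]
    for r in ids[1:]:
        if r == prev + 1:
            prev = r
        else:
            runs.append((start, prev))
            start = prev = r
    runs.append((start, prev))
    return runs

def load_rows(flat_weights, row_ids, row_size):
    """Read only the needed rows (less data transferred), one contiguous
    slice per run (larger sequential reads) instead of one read per row."""
    out = []
    for s, e in coalesce_rows(row_ids):
        out.extend(flat_weights[s * row_size:(e + 1) * row_size])
    return out
```

For the request {0, 1, 5} this issues two reads instead of three; on real flash storage, fewer and larger sequential reads translate directly into higher effective bandwidth.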

02 AI’s number one buyer: “Spend a small amount of money to do big things”

Although the outside world perceives Apple to be slower than other giants in deploying cutting-edge technologies such as generative AI, Apple also has its own confidence in the field of AI.

Research firm PitchBook, which has tracked Apple’s many AI acquisitions, concluded that Apple focuses on acquiring top teams that can apply machine learning to specific consumer products. Its acquisition strategy centers on consumer applications of AI, but also covers machine learning deployment and operations for edge devices, along with limited bets on deep learning and more horizontal technologies.

By one count, starting with the acquisition of Siri in 2010, Apple acquired more than 30 AI startups in a decade, five of which (Shazam, PrimeSense, Turi, Lattice Data, and Xnor.ai) were acquired for no less than $200 million each. Since 2017, Apple has acquired 21 AI startups, nearly twice as many as Microsoft or Meta, earning it the title of “number one buyer of AI.” From 2021 its pace appears to have slowed, but it has still picked up startups such as Curious AI, AI Music, and WaveOne.

Incomplete statistics of Apple’s acquisitions in the AI field from 2010 to the present|Geek Park

Overall, Apple’s AI acquisition strategy can be summarized as “spending a little to do a lot.” Apple rarely makes large acquisitions; its targets are usually startups whose technology can be tightly integrated with existing products and services and help strengthen its ecosystem. When Apple buys a company, the main consideration is often how that company’s technology can be folded into projects Apple is already developing.

The transaction amounts for these acquisitions are generally low, and the acquired companies’ technical directions mainly cover speech recognition and conversation, followed by facial recognition and image recognition. In practice, these acquired technologies support many existing Apple products and services: improving the Siri voice assistant, powering Face ID facial recognition, optimizing the Photos app, enhancing music services, and improving weather-forecast accuracy.

Many of Apple’s acquisitions seem to be aimed at improving Siri, which shows Siri’s important position in Apple’s systems. For example, the acquisition of Inductiv was to improve Siri’s data, the acquisition of Voysis was to improve Siri’s understanding of natural language, and the acquisition of PullString was to make it easier for iOS developers to use Siri functions in their applications.

There are also some acquisitions that are aimed at future products. For example, Apple acquired the self-driving startup Drive.ai in 2019, possibly to promote the development of its self-driving car project. Apple does not disclose all acquisition information, so there may be other artificial intelligence companies acquired by Apple that are not known.

03 AI has long been integrated into Apple systems

Beyond its many acquisitions, Apple’s own AI development goes back many years. From the Knowledge Navigator concept in 1987, to a speech recognition project in 1990, to the 2011 launch of Siri as the first consumer voice assistant, Apple showed its interest in AI very early, but has always kept a low profile about it.

Apple has historically not been first to launch a new technology, especially one not yet proven with consumers. For example, MP3 players had already validated their market before Apple entered; it joined only after identifying a superior solution, the iPod.

The same is true in the field of mobile phones. Although other companies launched smartphones early, Apple chose to enter the market in 2007 only after ensuring that it could provide an excellent customer experience. Similarly, although tablets have been around since 1989, the product category failed to gain traction until Apple launched the iPad.

Apple has always put consumer experience first, typically waiting for a technology to mature before commercializing it. This prudent strategy avoids the risk of unstable early-stage technology, and lets Apple seize market opportunities with more mature and polished products.

Apple can therefore be expected to follow the same path with ChatGPT-like products: it will not launch them hastily before it is ready. In other words, while maintaining a sense of mystery, Apple should eventually release mature AI products in its own way.

In fact, there are already many machine learning/AI applications in Apple’s existing products:

Image Processing

Apple uses machine learning to optimize photos taken by the iPhone camera, including Deep Fusion to reduce image noise and the iPhone 15’s Portrait Mode tools.

iPhone 15 can detect whether there are people in the picture and automatically capture rich depth information|Image source: apple

  • Visual Search - Machine learning powers the iPhone’s ability to detect photo content.
  • The iPhone 15’s upgraded camera can use machine learning to distinguish between people and animals in the shot.
  • Digital avatar - When the Apple Vision Pro’s front cameras scan a user’s face, Apple uses machine learning to generate a “digital avatar” of the user.

Speech processing

  • Personal Voice, real-time transcription: iPhone 15 supports Personal Voice, which lets users synthesize a voice similar to their own to speak the text they type during FaceTime and phone calls, as well as Live Voicemail, which transcribes voicemail messages in real time.

Search engines and suggestion systems

  • Spotlight Search: Spotlight search and search across the iOS operating system are powered by artificial intelligence.
  • Siri Suggestions: When the iPhone offers suggestions, such as sending a birthday greeting or adding an event from Mail to your calendar, machine learning algorithms are behind it.
  • Keyboard: On-device machine learning lets the keyboard improve its model as each user types. In addition, a more advanced Transformer language model for word prediction helps the keyboard better understand the user’s language habits and greatly improves input accuracy.
  • AutoCorrect: Apple’s AutoCorrect system and word suggestion options are powered by machine learning.
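As a rough illustration of on-device adaptation (a toy stand-in, not Apple’s actual Transformer-based predictor), a simple bigram model shows how a keyboard can personalize suggestions from nothing but the user’s own typing:

```python
from collections import Counter, defaultdict

class NextWordPredictor:
    """Toy bigram predictor illustrating on-device personalization."""
    def __init__(self):
        self.bigrams = defaultdict(Counter)

    def learn(self, text):
        # Count each observed (previous word -> next word) pair;
        # the model adapts to whatever this user actually types.
        words = text.lower().split()
        for prev, nxt in zip(words, words[1:]):
            self.bigrams[prev][nxt] += 1

    def suggest(self, prev_word, k=3):
        # Most frequent continuations of the previous word, best first.
        counts = self.bigrams[prev_word.lower()]
        return [w for w, _ in counts.most_common(k)]

p = NextWordPredictor()
p.learn("see you tomorrow")
p.learn("see you soon")
p.learn("see you soon")
print(p.suggest("you"))  # ['soon', 'tomorrow']
```

A real keyboard model is far richer, but the design point is the same: the training data never leaves the device, which is how personalization and Apple’s privacy stance coexist.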

Health Monitoring

  • ECG: The ECG feature on Apple Watch analyzes heart-rhythm data to determine whether the user may be experiencing atrial fibrillation.
  • Collision detection and fall detection - Through machine learning, Apple devices can determine whether the user has collided or fallen based on information collected by various sensors.
  • Apple Watch Series 9 and Ultra 2: AI is integrated into the double-tap gesture, alongside a brighter display, a smarter Siri, and advanced health monitoring.

Another possible application is in cars - Apple’s Project Titan, a self-driving car project. Although the secret project is called Apple Car, it is still uncertain whether Apple will actually launch a car.

According to reports, the autonomous system being developed under Project Titan needs a brain, and that is where Apple’s artificial intelligence comes in. Many of the technologies introduced for Apple Vision Pro could, scaled up, also play a major role in the car project.

Coordinating in real time, detecting objects, understanding user commands, and generating feedback simultaneously is a demanding task even for Apple’s advanced Neural Engine. A 2021 report, however, indicated that Apple had completed such a chip and would begin testing it.

Looking ahead, AI in Apple products should keep advancing in areas such as image processing, search and recommendations, and environmental sensing. One potential weakness of its ecosystem, however, is that Apple’s insistence on data privacy and product polish may limit how quickly it can deploy cutting-edge technologies such as generative AI, which is precisely why the public now perceives it as a step behind.

But none of this obscures Apple’s huge potential in AI, which rests mainly on three factors:

First, Apple has more than 2 billion devices running iOS worldwide, a huge user base for future AI applications. According to Apple CFO Luca Maestri, the installed base of active devices passed the 2 billion mark as of February 2023, and by the end of the June quarter it had reached record highs in every region and market segment.

On the other hand, Apple’s Siri voice assistant handles 25 billion requests per month, reflecting strong consumer demand for AI tools such as voice assistants. If Apple can launch products similar to ChatGPT in the future, the scale of AI data and interactions from its consumers will be huge.

Also significant: Apple’s paid subscriptions are growing rapidly, having passed the 1 billion mark while maintaining double-digit growth. In the August 2023 third-quarter earnings report, CEO Tim Cook noted that Apple’s services revenue “hit a record high” and that paid subscriptions exceeded 1 billion and were growing at a double-digit rate. This lays a solid foundation for Apple to grow revenue through AI applications; with such a broad user base, Apple has ample room to grow in the consumer AI market.

In addition, there is news that Apple is investing heavily in artificial intelligence servers and plans to build hundreds more in 2024 to prepare for its upcoming artificial intelligence era. In the future, Apple still has great potential to become the number one platform for consumer AI applications.

References:

  1. Inside Apple’s Big Plan to Bring Generative AI to All Its Devices, Bloomberg
  2. Apple may be quiet on AI, but it’s also the biggest buyer of AI companies, Quartz
  3. Apple Boosts Spending to Develop Conversational AI, The Information
  4. LLM in a flash: Efficient Large Language Model Inference with Limited Memory