Apple is reportedly considering the introduction of advertisements in Apple Maps as part of its ongoing efforts to expand its Services revenue, according to a report from Bloomberg’s Power On newsletter. The company recently held an all-hands meeting to discuss potential monetization strategies for its navigation app, marking a renewed interest in a concept that was first explored back in 2022.
At that time, reports indicated that search ads were already in development, with an expected rollout in 2023. However, the plan never materialized. Now, Apple appears to be revisiting the idea, though there is no official confirmation of an imminent launch or active development at this stage.
If Apple moves forward with this initiative, advertising in Maps could follow a model similar to Google Maps, where businesses pay to appear higher in search results. Additionally, Apple might allow businesses to sponsor locations on the map itself, giving retailers and restaurants greater visibility within the app.
Apple has already integrated advertising into several of its platforms. The App Store features sponsored applications in search results and on the home page, while Apple News displays mid-article ads, sponsored content, and paid subscriptions. Late last year, Apple even transitioned its ad sales to an in-house team, signaling a broader push into digital advertising.
Expanding advertising efforts in Apple Maps aligns with Apple's broader strategy to increase its Services revenue, a segment that has become a major growth driver for the company. According to Apple Insider, Apple held a 51% share in programmatic advertising as of August 2024—an increase of 6% year-over-year—and generated approximately $10 billion in ad-related revenue in 2024 alone.
Bloomberg also suggests that Apple may extend advertising to Apple TV+, beyond its existing MLS and MLB game sponsorships. If implemented, this move could mark a significant shift in Apple’s traditionally ad-light ecosystem, further solidifying its presence in the digital advertising space.
While Apple has not yet confirmed a timeline or specific plans for Apple Maps ads, it’s clear the company is actively exploring new ways to monetize its services—a shift that could reshape how users experience the Apple ecosystem in the years ahead.
One year after its highly anticipated launch, the Apple Vision Pro remains far from the blockbuster success the company had envisioned. Priced at an eye-watering $3,500, the device has struggled to capture widespread consumer interest. Reports surfaced just two months after release that Apple had cut shipment targets in half, citing weaker-than-expected demand. Internally, concerns extend beyond pricing—data suggests that even early adopters are using the device less frequently than anticipated.
To reinvigorate sales, Apple is now turning to software. According to Bloomberg’s Mark Gurman, a major update is on the way as part of visionOS 2.4. The beta version could arrive this week, with a full rollout expected in April. While software improvements may enhance the user experience, the critical question remains: Can they make the Vision Pro compelling enough to justify its steep price tag?
The most significant addition in the upcoming update is Apple Intelligence, the company’s proprietary generative AI system. Previously exclusive to newer iPhones, iPads, and Macs, this technology will now leverage the Vision Pro’s M2 chip and 16GB of RAM to enable advanced on-device processing.
With Apple Intelligence, Vision Pro users will gain access to several AI-driven features already familiar from other Apple devices, including Writing Tools, Genmoji, and Image Playground.
While these additions bring some valuable functionality, there is no confirmation of AI features designed specifically for Vision Pro. This raises concerns that Apple is simply repurposing existing AI tools rather than delivering innovations tailored to the augmented reality (AR) experience.
Beyond AI, Apple is developing new features aimed at making Vision Pro more appealing. One notable addition is a new spatial content app, designed to showcase 3D images and panoramas sourced from various providers. The goal is to highlight the device’s immersive capabilities in a way that static images or traditional videos cannot.
Additionally, Apple plans to release a new immersive video experience focused on Arctic surfing, set to debut on February 21. While these efforts demonstrate Apple’s commitment to enhancing the content ecosystem for Vision Pro, they do not fundamentally change the device’s usability or accessibility for mainstream consumers.
Another key update will introduce an improved guest mode, making it easier for multiple people to use the same Vision Pro headset. While this may seem like a minor tweak, Apple hopes that allowing users to share their device with friends and family will help generate more interest and, ultimately, lead to additional sales.
However, there’s an inherent flaw in this strategy: Vision Pro remains a fundamentally personal device, requiring custom optical inserts for many users. While guest mode may allow for casual demonstrations, it does little to solve the device’s accessibility and affordability challenges.
Despite these updates, the Vision Pro’s biggest obstacle remains its prohibitive price. At $3,500, it costs nearly seven times as much as the Meta Quest 3, a competing mixed-reality headset that offers a more affordable entry point into AR and VR experiences.
Even with AI integration and new immersive content, it’s difficult to imagine software updates alone convincing mainstream consumers to make such a significant investment. While Apple’s ecosystem lock-in may encourage some users to buy Vision Pro for the sake of continuity across their devices, many of the new features—such as AI writing tools—are already available on more practical and accessible devices like the MacBook and iPhone.
Apple’s upcoming visionOS 2.4 update will undoubtedly improve the Vision Pro experience, but it’s unlikely to fundamentally change the product’s trajectory. While AI tools, immersive content, and multi-user functionality add value, they do not solve the device’s core weaknesses—its high price, limited real-world utility, and lack of a compelling must-have feature.
At this point, Apple seems to be relying on incremental software updates to buy time while it works on future hardware improvements. Consumers, however, may not be willing to wait. To achieve true mainstream adoption, Apple will likely need to introduce a second-generation Vision Pro, one that either dramatically lowers the price or delivers truly groundbreaking features that justify its cost.
Until then, Vision Pro may remain a niche product—one that showcases Apple’s technological ambition but fails to reach its full commercial potential.
Apple’s foray into artificial intelligence, Apple Intelligence, has been underwhelming, to say the least. The most glaring failure? Its news summaries, which faced widespread backlash for misreporting headlines and generating false information. The issue became so severe that Apple paused the entire feature this week until it can be fixed.
None of this should come as a surprise. AI “hallucinations”—instances where AI models generate incorrect or misleading information—are a well-documented issue with large language models (LLMs). To date, no one has found a true solution, and it's unclear whether one even exists. But what makes Apple’s situation particularly reckless is that its own engineers warned of these deficiencies well before the company launched its AI system.
Last October, a group of Apple researchers published a study evaluating the mathematical reasoning capabilities of leading LLMs. The yet-to-be-peer-reviewed research added to the growing consensus that AI models don’t actually “reason” in the human sense.
"Instead," the researchers concluded, "they attempt to replicate the reasoning steps observed in their training data."
In other words, these AI models aren’t truly thinking—they’re just mimicking patterns they’ve seen before.
To test AI reasoning, Apple’s researchers subjected 20 different models to thousands of math problems from the widely used GSM8K dataset. These problems weren’t particularly difficult—most could be solved by a well-educated middle schooler. A typical question might read:
"James buys 5 packs of beef that are 4 pounds each. The price of beef is $5.50 per pound. How much did he pay?"
The key test came when researchers changed the numbers in the problems to ensure the AI models weren’t just memorizing answers. Even this minor tweak caused a small but consistent drop in accuracy across all models.
But when the researchers went further—changing names and adding irrelevant details (such as mentioning that some fruits in a counting problem were "smaller than usual")—the results were disastrous. Some models saw accuracy drop by as much as 65%.
Even the best-performing model, OpenAI’s o1-preview, saw a 17.5% decline, while its predecessor, GPT-4o, dropped by 32%. These results exposed a critical weakness: AI struggles not just with reasoning, but with identifying relevant information for problem-solving.
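To make the researchers’ setup concrete, here is a minimal sketch of how such a perturbation test can work. The template, name list, and distractor below are illustrative stand-ins, not the paper’s actual GSM-Symbolic code: the problem is stored as a template, the names and numbers are re-sampled, and the ground truth is recomputed so that a memorized answer is useless.

```python
import random

# Hypothetical template mirroring the beef-pack question quoted above.
TEMPLATE = ("{name} buys {packs} packs of beef that are {pounds} pounds "
            "each. The price of beef is ${price:.2f} per pound. "
            "How much did he pay?")

NAMES = ["James", "Oliver", "Noah", "Ethan"]

# A "GSM-NoOp"-style distractor: an irrelevant detail that should not
# change the answer, yet often derails the models.
DISTRACTOR = " Two of the packs were slightly smaller than usual."

def make_variant(seed: int) -> tuple[str, float]:
    """Re-sample the surface details and recompute the ground truth."""
    rng = random.Random(seed)
    packs = rng.randint(2, 9)
    pounds = rng.randint(2, 6)
    price = rng.choice([4.50, 5.50, 6.00, 7.25])
    question = TEMPLATE.format(name=rng.choice(NAMES),
                               packs=packs, pounds=pounds, price=price)
    return question, packs * pounds * price

if __name__ == "__main__":
    question, answer = make_variant(seed=1)
    print(question, "->", answer)
    # Appending the distractor must not change the correct answer:
    print(question + DISTRACTOR, "->", answer)
```

Scoring then reduces to comparing the model’s final number against the recomputed ground truth; a model that genuinely reasons should be indifferent both to the re-sampled names and numbers and to the appended distractor.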
The study’s conclusion was damning. "This reveals a critical flaw in the models' ability to discern relevant information for problem-solving," the researchers wrote. "Their reasoning is not formal in the common sense term and is mostly based on pattern matching."
Put simply, AI models are great at appearing intelligent, and they often deliver the right answers—but only as long as they can copy and repackage solutions they've seen before. Once they can’t rely on direct memorization, their performance crumbles.
This should have raised serious concerns about trusting an AI model to summarize news—a process that involves rearranging words while preserving meaning. Yet, Apple ignored its own research and pushed forward with Apple Intelligence anyway.
Then again, this trial-and-error approach has become standard practice across the AI industry. Apple’s misstep may be frustrating, but it’s hardly surprising.
Apple's quest to embed Face ID technology directly into the display of future iPhones has been a long-running project. While the technical challenges are significant, a newly granted Apple patent suggests the company may have found a promising solution.
Former Apple design chief Jony Ive envisioned the ultimate iPhone as "a single slab of glass," free from bezels, notches, or cutouts. While Ive has since left the company, Apple continues to pursue this design goal. Achieving such a seamless display requires embedding all Dynamic Island components, including the front-facing camera and Face ID sensors, beneath the screen.
The front-facing camera presents a longer-term challenge, as current technology cannot deliver the image quality expected of an iPhone. However, embedding Face ID under the display is a more achievable milestone and is likely to happen first.
Face ID relies on infrared (IR) light for accurate facial recognition. Unfortunately, IR light struggles to penetrate standard displays efficiently, leading to slower and less reliable performance. Apple has explored potential solutions in the past, including selectively deactivating certain pixels to enhance IR transmission, but the newly granted patent proposes a simpler and more effective method: removing specific subpixels.
Every pixel on a display consists of three subpixels—red, green, and blue—that combine to produce colors. Apple’s patent describes a method for removing selected subpixels to create gaps that allow IR light to pass through the screen.
The innovation lies in making the removed subpixels virtually invisible to the human eye. By strategically eliminating subpixels adjacent to the same color emitters in neighboring pixels, Apple can maintain color accuracy. For example:
“A subset of all display subpixels in the pixel removal region may be removed by iteratively eliminating the nearest neighboring subpixels of the same color.”
Additionally, removing the power and control lines associated with these subpixels enlarges the clear area, further improving IR light transmission. Apple also suggests that parts of the touch-sensitive mesh could be removed in the same regions to enhance IR light penetration without affecting touch accuracy.
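To visualize the idea, here is a rough sketch of one possible removal pattern. The RGB-stripe grid and the modulo rule are invented for illustration and are not details from Apple’s patent; the point is simply that the dropped subpixel cycles from pixel to pixel, so every pixel’s nearest neighbors still emit the color it gave up and the eye averages the gaps away.

```python
# Illustrative model of a subpixel-removal region for an under-display
# IR window. The layout and the (x + y) % 3 rule are assumptions made
# for this sketch, not the patent's actual scheme.
COLORS = ("R", "G", "B")

def removed_color(x: int, y: int) -> str:
    """Cycle the removed color so horizontally and vertically adjacent
    pixels always lose different colors, which helps hide each gap."""
    return COLORS[(x + y) % 3]

def render_region(width: int, height: int) -> None:
    """Print the surviving subpixels; '.' marks an IR-transparent gap."""
    for y in range(height):
        row = []
        for x in range(width):
            gone = removed_color(x, y)
            row.append("".join(c if c != gone else "." for c in COLORS))
        print(" ".join(row))

render_region(6, 4)
# First two rows of output:
# .GB R.B RG. .GB R.B RG.
# R.B RG. .GB R.B RG. .GB
```

In a real panel, the drive and control lines feeding the removed subpixels would be omitted as well, as the patent notes, widening each transparent gap for the IR emitters and sensors underneath.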
While earlier predictions suggested that embedded Face ID would debut with the iPhone 15 or iPhone 16, neither model featured this technology. However, there’s growing speculation that it could finally arrive with the iPhone 17. Several factors fuel this optimism:
Smaller Display Cutouts: Industry analysts, such as Jeff Pu, have predicted that the iPhone 17 Pro Max may feature a significantly smaller Dynamic Island. Embedding Face ID beneath the display would be a logical way to achieve this refinement.
The iPhone 17 Air: Rumors surrounding the iPhone 17 Air suggest a design that prioritizes sleekness and minimalism. Reducing the Dynamic Island to a simple camera punch-hole aligns with this goal. Although initial reports claimed the iPhone 17 Air would be the most expensive model in the lineup, recent updates indicate pricing adjustments, leaving the release timeline uncertain.
By tackling the challenges of IR transmission through innovative methods like subpixel removal, Apple is bringing its vision of a seamless, uninterrupted display closer to reality.
It remains to be seen whether the iPhone 17 or another future model will mark the debut of this technology. For iPhone users, the prospect of a bezel-free, single-slab design offers an exciting glimpse into the future of smartphone design.
Apple’s upcoming iOS 18.4 is set to bring a host of impressive AI enhancements, particularly focused on upgrading Siri. While these changes will improve the user experience across Apple devices, it’s the potential impact on the Vision Pro that truly stands out. Let’s explore why these updates are so exciting.
Siri is undergoing a significant transformation, marking the beginning of what Apple calls a "new era" for its voice assistant. This evolution, however, is happening incrementally across several iOS updates rather than arriving in a single release.
ChatGPT integration has been the standout improvement so far. However, its inclusion highlights Siri’s previous limitations, underscoring the need for further advancements. Fortunately, iOS 18.4 is set to address many of these shortcomings.
Scheduled for a public release in April, with beta testing starting as early as next month, iOS 18.4 will introduce three transformative features to Siri: awareness of the user's personal context, onscreen awareness of what is currently displayed, and the ability to take actions inside individual apps.
If these features function as intended, Siri could finally evolve into the intelligent assistant Apple has envisioned.
As a new Vision Pro user, I’ve quickly realized how integral Siri is to the spatial computing experience. On devices like the iPhone or iPad, Siri is useful for small tasks such as setting timers, sending reminders, or performing quick searches, but on the Vision Pro, where there is no physical keyboard and in-air typing is tedious, voice is the primary way to launch apps, dictate text, and control the system.
Although Apple Intelligence isn’t yet supported on Vision Pro, it’s likely coming soon. When it does, iOS 18.4’s Siri upgrades could elevate the device to new heights, bringing personal context and onscreen awareness to an interface driven largely by voice.
This level of capability could unlock the full potential of spatial computing, transforming how we interact with technology.
The enhancements coming to Siri in iOS 18.4 represent more than incremental updates—they signal Apple’s commitment to making its voice assistant a central part of its ecosystem. For the iPhone and iPad, these changes will undoubtedly improve usability. But for the Vision Pro, they could define the success of Apple’s spatial computing ambitions.
The long-held dream of simply speaking to your computer and having it respond intelligently is becoming increasingly real. With iOS 18.4, Siri is closer than ever to delivering on its promise as the ultimate digital assistant.