Apple has rolled out its new AI-powered App Store tagging system in the iOS 26 developer beta. The feature uses AI to extract metadata from app descriptions, categories, and screenshots to assign tags aimed at improving app discoverability.
The tags are not yet visible on the public App Store and don’t influence search rankings for users outside the beta testing environment.
A recent analysis from app intelligence firm Appfigures suggested Apple had started ranking apps based on text extracted from screenshots. The firm speculated that this was done with optical character recognition (OCR).
Apple clarified at WWDC 2025 that the process instead uses AI to extract additional signals from app metadata, beyond just the app's name, subtitle, and keyword list. Developers won't need to add keywords manually to screenshots or other assets to influence these tags.
Apple plans to let developers review and control the AI-generated tags before they go live, and human reviewers will vet the tags for accuracy.
The move signals a shift in how Apple’s App Store search algorithm may work, potentially changing how app rankings are determined once tags roll out globally.
Developers should monitor the beta and start thinking about which tags might boost their app’s findability when the system launches publicly.
Appfigures has published more details on its analysis.