Blue eyes (Photo by TOMMY VAN KESSEL on Unsplash)

In A Nutshell

  • Human eyes perceive up to 94 pixels per degree for high-contrast content, about 50% sharper than the 60 ppd threshold Apple’s “Retina Display” was designed around. Some people in the study reached 120 ppd.
  • Most 8K TVs are overkill. Unless viewers sit closer than 3.5 to 4 feet from a 65-inch screen, they won’t see a meaningful improvement over 4K. Current industry viewing guidelines are overly conservative.
  • Video compression may be throwing away visible detail. Red-green color resolution (89 ppd) nearly matches black-and-white (94 ppd), challenging the widespread practice of cutting all color information in half during compression.
  • Peripheral vision loses color detail much faster than brightness detail. At just 10 degrees from center, color sharpness drops nearly fivefold while brightness drops only 2.3 times. That asymmetry could help VR headsets render more efficiently.

When Apple introduced the Retina Display in 2010, the company made a bold claim: the screen packed in so many pixels that the human eye couldn’t discern individual dots. Steve Jobs declared it had crossed a magical threshold—matching the limits of human vision itself.

Turns out, that wasn’t quite true.

A study published in Nature Communications reveals that human eyes can detect significantly more detail than tech companies have assumed. Researchers at the University of Cambridge and Meta found that eyes can perceive up to 94 pixels per degree for high-contrast content like text—roughly 50% sharper than the “Retina Display” threshold. Even Apple’s latest iPad Pro, with its Ultra Retina XDR screen, delivers only about 65 pixels per degree when held at a comfortable reading distance.

“This demonstrates that the 60-65 ppd range is not the ‘retinal resolution’ for a display,” the researchers wrote.

The gap matters. For over a decade, the display industry has operated under the assumption that 60 pixels per degree represents the ceiling of human perception. That number came from the standard eye chart test—the familiar poster with progressively smaller letters. Achieving 20/20 vision means resolving details at one arcminute of visual angle, which translates to 60 pixels per degree. But the Cambridge team discovered that younger adults with healthy vision routinely exceed this benchmark. Some participants in the study could see details as fine as 120 pixels per degree.
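As a quick back-of-the-envelope check (our arithmetic, not the paper's), the 60-pixel figure follows directly from the definition of 20/20 acuity, which corresponds to one pixel per arcminute of visual angle:

$$
1^\circ = 60\ \text{arcmin} \quad\Rightarrow\quad \frac{60\ \text{arcmin/degree}}{1\ \text{arcmin/pixel}} = 60\ \text{pixels per degree.}
$$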

How Researchers Measured the True Limits of Human Vision

Measuring the true limits of vision required solving a technical puzzle. Digital displays can only reproduce images crisply at their native resolution. Trying to show intermediate resolutions demands digital resampling, which introduces artifacts that contaminate the measurements.

The research team, led by Maliha Ashraf, Alexandre Chapiro, and Rafał Mantiuk, built an inventive workaround: a 27-inch 4K monitor mounted on a motorized track. Moving the screen closer increased the pixels per degree; moving it farther decreased them. The setup was actually a high-tech remake of a 130-year-old experiment from 1894, when researcher Theodor Wertheim used wire gratings on a movable frame to study vision.
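To illustrate the geometry, here is a minimal sketch of our own (not the authors' code): the 27-inch 4K panel and 1.6-meter rail match the paper's setup, but the sampled distances are purely illustrative.

```python
import math

def pixels_per_degree(distance_mm: float, pixel_pitch_mm: float) -> float:
    """Effective resolution (pixels per degree) a display presents at a given viewing distance."""
    deg_per_pixel = math.degrees(math.atan(pixel_pitch_mm / distance_mm))
    return 1.0 / deg_per_pixel

# 27-inch 4K (3840 x 2160) panel, matching the monitor used in the study.
diag_pixels = math.hypot(3840, 2160)
pitch_mm = 27 * 25.4 / diag_pixels        # physical size of one pixel, ~0.156 mm

for d_m in (0.5, 1.0, 1.6):               # positions along the 1.6 m rail (illustrative)
    print(f"{d_m:.1f} m -> {pixels_per_degree(d_m * 1000, pitch_mm):.0f} ppd")
```

Sliding the monitor from half a meter out to the far end of the rail roughly triples the pixels per degree it presents (from about 56 to about 179), which is how the team could probe thresholds well beyond 94 ppd without any digital resampling.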

Eighteen participants sat in a dark room, watching patterns flash on the traveling display. Patterns consisted of high-contrast gratings—alternating light and dark stripes wrapped in a blurred bubble. Participants identified which of two time intervals contained the pattern, and a computer algorithm adjusted the display distance based on their answers, homing in on the threshold where they could just barely detect the stripes.
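The actual experiment used the QUEST adaptive procedure; the toy staircase below is our own simplified stand-in (with an invented simulated observer), meant only to illustrate the general idea of nudging the tested resolution up after correct answers and down after misses until it settles near the threshold.

```python
import random

def simulated_observer(ppd: float, threshold_ppd: float = 94.0) -> bool:
    """Toy observer for a two-interval forced-choice trial: reliable when the
    pattern is coarser than the threshold, at chance (50%) when it is finer."""
    return random.random() < (0.95 if ppd < threshold_ppd else 0.5)

def staircase(start_ppd: float = 40.0, step: float = 6.0, trials: int = 50) -> float:
    """Simple 2-down/1-up staircase over effective resolution: two correct answers
    in a row push the display farther away (more ppd, harder); one miss pulls it
    closer (fewer ppd, easier). The levels visited settle near the threshold."""
    ppd, streak, visited = start_ppd, 0, []
    for _ in range(trials):
        visited.append(ppd)
        if simulated_observer(ppd):
            streak += 1
            if streak == 2:
                ppd, streak = ppd + step, 0
        else:
            ppd, streak = ppd - step, 0
    return sum(visited[-10:]) / 10        # average of the last few levels

print(f"estimated threshold ~ {staircase():.0f} ppd")
```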

The experiment tested three types of patterns: black-and-white, red-green, and yellow-violet. Researchers also tracked how resolution limits changed when participants looked at the center of the screen versus 10 or 20 degrees to the side, simulating the difference between direct and peripheral vision. To confirm the findings applied to real content and not just test patterns, they also measured thresholds using actual text rendered in both standard and dark mode formats. Text results closely matched the pattern results.

Why Your 8K TV Might Be Overkill

Practical concerns emerge when considering display purchases. The research team built a model translating their measurements into real-world viewing scenarios, and the results challenge current industry standards.

Take 8K televisions. The International Telecommunication Union recommends viewing 8K displays from between 0.8 and 3.2 display heights away. But the Cambridge model shows those recommendations are overly conservative. According to the new data, sitting farther than 1.3 display heights from an 8K screen means most people won’t perceive any benefit from the extra resolution. For a 65-inch 8K TV, that’s about 3.5 to 4 feet. At typical couch distances of 8 to 10 feet, most viewers would not see a meaningful sharpness gain over 4K.
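A rough calculation of our own (not the paper's model, which also accounts for contrast sensitivity) shows why: by about 3.5 feet, a 65-inch 8K panel is already delivering roughly the 94 ppd foveal limit, and a 4K panel reaches that limit well before typical couch distances.

```python
import math

def delivered_ppd(distance_m: float, diagonal_in: float, h_px: int, v_px: int) -> float:
    """Pixels per degree a panel presents to a viewer at the given distance."""
    pitch_mm = diagonal_in * 25.4 / math.hypot(h_px, v_px)
    return 1.0 / math.degrees(math.atan(pitch_mm / (distance_m * 1000)))

for dist_ft in (3.5, 6, 9):               # illustrative couch distances
    d_m = dist_ft * 0.3048
    ppd_8k = delivered_ppd(d_m, 65, 7680, 4320)
    ppd_4k = delivered_ppd(d_m, 65, 3840, 2160)
    print(f"{dist_ft:>4} ft: 8K ~ {ppd_8k:.0f} ppd, 4K ~ {ppd_4k:.0f} ppd (foveal limit ~ 94 ppd)")
```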

8K Televisions likely offer little visual improvement for viewers at home. (Credit: George Trumpeter on Shutterstock)

Desktop monitors also reveal a gap between current technology and human capability. A typical 27-inch 4K display at arm’s length hovers near 60 pixels per degree—right at the old 20/20 vision standard but well below the 94-pixel-per-degree benchmark the study identified.

Phone screens present an interesting case. At close reading distance, a modern phone like the iPhone 15 nears, but does not meet, the 94-pixel-per-degree average limit for high-contrast detail.

Virtual reality faces the biggest challenge. Most current headsets render well below the eye’s central limit, and vendors already rely on foveated rendering to concentrate detail where the user is looking. The new data suggests that tuning color and luminance resolution separately across the visual field could yield further performance gains.

The Hidden Problem with Video Streaming Quality

One of the study’s more technical discoveries has far-reaching consequences for video streaming and image formats. Standard practice in compression assumes human eyes are much less sensitive to color detail than to brightness detail. Nearly every compressed image and video format, from JPEG to H.265, halves the color information based on this assumption.

The new data suggests this practice needs reconsideration, at least for red and green. The achromatic resolution limit hit 94 pixels per degree, while red-green patterns came in at 89 pixels per degree—a negligible difference. Only yellow-violet patterns showed a substantial drop to 53 pixels per degree.

Current compression algorithms typically cut resolution for all color channels equally. If the red-green channel can be perceived nearly as sharply as black-and-white, current schemes may be discarding visible information. At the same time, the yellow-violet channel could potentially be reduced more than current practice without affecting perceived quality.
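To make that subsampling step concrete, here is a minimal sketch of our own using standard BT.601 weights; note that the Cb/Cr channels are not the same axes as the study's red-green and yellow-violet opponent channels. The sketch shows 4:2:0 subsampling, which halves both chroma planes in each dimension regardless of how visible they are.

```python
import numpy as np

def ycbcr_420(rgb: np.ndarray) -> tuple[np.ndarray, np.ndarray, np.ndarray]:
    """Convert an RGB image (H x W x 3, floats in [0, 1]) to Y'CbCr (BT.601 weights)
    and apply 4:2:0 subsampling: full-resolution luma, chroma halved in both axes."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    y  =  0.299 * r + 0.587 * g + 0.114 * b
    cb = -0.169 * r - 0.331 * g + 0.500 * b
    cr =  0.500 * r - 0.419 * g - 0.081 * b
    # 4:2:0: average each 2x2 block of the chroma planes (assumes even dimensions).
    cb_sub = cb.reshape(cb.shape[0] // 2, 2, cb.shape[1] // 2, 2).mean(axis=(1, 3))
    cr_sub = cr.reshape(cr.shape[0] // 2, 2, cr.shape[1] // 2, 2).mean(axis=(1, 3))
    return y, cb_sub, cr_sub

img = np.random.rand(8, 8, 3)            # stand-in for a real frame
y, cb, cr = ycbcr_420(img)
print(y.shape, cb.shape, cr.shape)        # (8, 8) (4, 4) (4, 4)
```

The study's numbers hint that an encoder could treat chroma asymmetrically, preserving more of the information that carries red-green differences, although mapping that cleanly onto Cb/Cr is not straightforward.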

Your Peripheral Vision Can’t See Color Nearly as Well

Looking straight ahead delivers the sharpest vision humans can muster. As an object moves toward the edge of the visual field, the ability to see fine details plummets. The study quantified exactly how much and revealed that the drop-off differs dramatically between brightness and color.

For black-and-white patterns, resolution declined 2.3 times between the center of vision and 10 degrees to the side. But for both red-green and yellow-violet patterns, the decline was much steeper—nearly five times at 10 degrees compared to dead center.

By 20 degrees into the periphery, participants could only detect black-and-white patterns at 21 pixels per degree, red-green at 7 pixels per degree, and yellow-violet at just 5 pixels per degree.

These measurements have practical applications for VR and AR displays, which use a technique called foveated rendering. Headsets track where users are looking and render the center of vision in high detail while reducing quality in the periphery. Most foveated rendering systems only account for brightness sensitivity. The new data shows they could save even more computing power by dialing down color resolution more aggressively away from the center of the gaze.
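As a concrete illustration (our own sketch, not the authors' rendering model), one could interpolate the study's reported averages to set separate luminance and chroma resolution budgets at each eccentricity. The 10-degree chroma values below are back-calculated from the reported 2.3-fold and roughly fivefold drops, so treat them as approximations.

```python
import numpy as np

# Average resolution limits from the study (pixels per degree) at 0, 10 and 20
# degrees of eccentricity; the 10-degree values are derived from reported ratios.
ECC_DEG  = np.array([0.0, 10.0, 20.0])
LUMA_PPD = np.array([94.0, 41.0, 21.0])   # achromatic (94 / 2.3 at 10 degrees)
RG_PPD   = np.array([89.0, 18.0,  7.0])   # red-green (roughly fivefold drop)
YV_PPD   = np.array([53.0, 11.0,  5.0])   # yellow-violet (roughly fivefold drop)

def target_ppd(eccentricity_deg: float) -> dict[str, float]:
    """Resolution budget (ppd) needed at a given angle from the gaze point,
    linearly interpolated between the study's measured eccentricities."""
    e = np.clip(eccentricity_deg, ECC_DEG[0], ECC_DEG[-1])
    return {
        "luma":          float(np.interp(e, ECC_DEG, LUMA_PPD)),
        "red_green":     float(np.interp(e, ECC_DEG, RG_PPD)),
        "yellow_violet": float(np.interp(e, ECC_DEG, YV_PPD)),
    }

for ecc in (0, 5, 10, 20):
    print(ecc, target_ppd(ecc))
```

In this reading of the data, a headset could hold luminance detail near 41 ppd at 10 degrees from the gaze point while letting the chroma channels drop below 20 ppd without the user noticing.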

For consumers, the gap between current technology and perceptual limits means room remains for noticeable improvements. Displays that genuinely match human acuity would make text crisper, eliminate the faint pixel grid visible on close inspection of current screens, and remove the subtle blur that viewers might not consciously notice but that affects perceived image quality.

Whether manufacturers will invest in reaching these higher thresholds depends on costs, battery life, and whether consumers will pay for the difference. Phone makers appear closest to the target. TV manufacturers have already exceeded what most people can perceive at typical viewing distances, making 8K a solution in search of a problem for living room setups. Monitor makers and VR developers have the farthest to go.

These limits reflect high-contrast content and a relatively young sample, so real-world perception will vary. But human vision has more resolving power than the industry has given it credit for. Tech companies now have a clearer target, backed by rigorous measurement rather than marketing calculations. Whether they’ll aim for it remains to be seen.


Disclaimer: This article reports on a single peer-reviewed study measuring visual resolution limits under specific laboratory conditions using high-contrast stimuli. Individual visual capabilities vary based on age, vision health, viewing conditions, and other factors. The findings represent population averages from a relatively young sample (mean age 25.5 years). Display technology assessments reflect current consumer products as of publication and are based on typical viewing distances. Readers should consult eye care professionals for individual vision assessments.


Paper Summary

Methodology

The study used a custom-built apparatus with a 27-inch 4K monitor mounted on a motorized rail system. The display could slide toward or away from participants along a 1.6-meter track, allowing researchers to change the effective pixel-per-degree resolution without digital resampling. Eighteen participants (6 female, 12 male) with a mean age of 25.5 years and normal or corrected-to-normal vision completed the experiments. Stimuli consisted of square-wave gratings with Gaussian envelopes, modulated along three color directions: achromatic (black-white), red-green, and yellow-violet. Researchers also tested with rendered text in both standard and dark modes. Participants completed a two-interval forced-choice task, indicating which time interval contained the stimulus. The QUEST adaptive procedure selected subsequent display distances based on participant responses, with 30-50 trials conducted to estimate each threshold. Measurements were taken at three eccentricities: central vision (0 degrees), 10 degrees, and 20 degrees off-center.

Results

The study found the foveal resolution limit reached 94 pixels per degree for achromatic patterns, 89 pixels per degree for red-green patterns, and 53 pixels per degree for yellow-violet patterns. These values exceeded the commonly cited 60 pixels per degree threshold derived from 20/20 vision standards. Individual participants demonstrated thresholds as high as 120 pixels per degree. Resolution declined rapidly with eccentricity: achromatic sensitivity dropped 2.3 times from center to 10 degrees, while both chromatic directions declined approximately 5 times. At 20 degrees eccentricity, thresholds fell to 21 pixels per degree (achromatic), 7 pixels per degree (red-green), and 5 pixels per degree (yellow-violet). Text recognition thresholds closely matched grating detection thresholds for both standard and dark-mode presentations, confirming the results apply to real-world content.

Limitations

The study measured behavioral thresholds that encompass the entire visual system rather than isolating specific optical or neural mechanisms. While this approach reflects real-world viewing conditions, it doesn’t identify which components (optical aberrations, photoreceptor spacing, or neural processing) limit resolution in different scenarios. Participants were relatively young (mean age 25.5 years), potentially representing better-than-average visual acuity. The study used square-wave gratings rather than sinusoidal gratings, which may have produced slightly higher thresholds because the contrast of a square wave’s fundamental frequency component is 1.273 times that of a matched sine wave. Measurements were taken with binocular viewing of the superior and inferior visual fields, which may not fully represent acuity in the nasal and temporal fields, where the two eyes’ fields of view overlap. The investigation of viewing distance effects did not reveal consistent trends across participants, suggesting individual variation that warrants further study.
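For context on that 1.273 figure (this note is ours, not the paper's): the Fourier expansion of a square wave puts 4/π of its amplitude into the fundamental component, so a square-wave grating presents about 27 percent more contrast at its fundamental frequency than a sine grating of the same nominal amplitude.

$$
\operatorname{sq}(x) = \frac{4}{\pi}\left(\sin x + \frac{\sin 3x}{3} + \frac{\sin 5x}{5} + \cdots\right), \qquad \frac{4}{\pi} \approx 1.273.
$$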

Funding and Disclosures

This research was funded by a research grant from Meta. Two of the three authors (Alexandre Chapiro and Rafał K. Mantiuk) list Meta, Applied Perception Science as an affiliation. The first author, Maliha Ashraf, was affiliated solely with the University of Cambridge Department of Computer Science and Technology. The authors declared no competing interests. The study was approved by the departmental ethical committee at the Department of Computer Science and Technology of the University of Cambridge. All participants provided informed consent and were compensated for their time.

Publication Information

Ashraf, M., Chapiro, A., & Mantiuk, R.K. “Resolution limit of the eye – how many pixels can we see?” Nature Communications 16, 9086 (2025). doi:10.1038/s41467-025-64679-2. The paper was received on September 17, 2024, accepted on September 24, 2025, and published online on October 27, 2025.
