In the last 5 years, an obscure Chinese company called Transsion, selling brands like Tecno, Infinix and Itel, has quietly become the largest mobile phone maker in Africa. Why? Because their phone cameras are tuned out of the box to work well with dark skin.
Looking at their demo reel for the Camon 18 camera, this seems like the perfect example of universal design. It captures beautifully color-balanced dark skin, and does so right alongside pale skin and vibrant colors.
Sadly, I wouldn't trust the software too much. Xiaomi, say, used to make beautiful smartphones, and was also found to have spying and remote-control code in its firmware.
I don't think that the smartphone makers are evil; the state forces their hand even if they aren't.
Not to go too deep, but there is a qualitative difference between the sort of "suspicion that everything is spying on you" and "it is very clearly legislated and discussed that certain spying is happening on devices X/Y/Z".
It's the difference between "let's use signal to communicate" while using an android or like... "let's go over this through email" when taking notes on a criminal conspiracy. Sure, your unpatched android probably has spyware on it too, signal might have issues, but you're guaranteed to have a lot of trouble with emails sitting on some server.
I see what you want to say. But to be honest, from a personal point of view as a European with an irrelevant job, I'm much less concerned about being spied on by China or Russia than by Google or Facebook. I'm sure the "western" companies are much more likely to betray me to a suddenly hostile local government.
Color films used to come in two varieties, more or less: films designed to represent caucasian skin tones in a desirable way, and films designed to make colors that pop. Fuji had 160C and 160S, while Kodak had Portra 160NC and 160VC. I think they tweaked something in the sensitivity curves of the “suitable for skin tones” product lines which reduced contrast in the relevant parts of the curve. My sense is that this would keep skin looking peach-colored without going into the more brilliant reds or magentas. The sensitivity curves for “makes colors that pop” were tweaked to increase the contrast in certain places. This was great for nature photos, product photos, or photos of food, but if you put caucasian skin in the picture, it might come out looking somewhat reddish or magenta, and skin blemishes stand out more. Worst case you get crab people.
This is just my perspective on the film itself, based on my experiences shooting and printing color film (in a wet lab). I was printing slowly, trying to get the best out of each print. I spent a lot of time looking at prints and sussing out variations in color, stuff that you could fix at the touch of a button in Photoshop, if it were digital.
Worth noting that Portra was a fairly recent and 'modern' film development, and showed up on the market long after the events depicted in the story. Film used to be split by whether it was balanced for daylight or tungsten light.
Still do, although there aren't that many color films left. Portra 160 these days is like the old 160NC and Ektar is like a more-saturated VC (or a higher contrast, negative Ektachrome). Both run about $6 per 4x5 sheet, so I don't work with them much.
To make something "just work", you have to make technical decisions for the user. To make those decisions you need assumptions. It's a lot of work to make assumptions that work for every user, so a lot of times people focus on making them work for the "target market": those who we feel will most effectively sustain the product, who tend to be those we're most familiar with, or those with the most social privilege or economic clout. But we shouldn't stop there.
All of this was true 100 years ago when Kodak was designing cameras, and it's true today as we build machine learning models.
Could also just be awareness of the limitations of the film. You're not going to send a card out to every lab in the country that looks like crap. And they still needed caucasian skin to look good; it was their biggest market.
As a dark skinned photographer I always am annoyed by the take that “white people who designed cameras made them take better photos of white people.” No, that’s not what is happening. The fact is that you get a lot less light from a dark face, and the nature of light sensing technology means that it’s going to be harder to take a picture of a dark face holding all else constant. Of course you can use exposure compensation etc but the “white supremacy” hot take is very misinformed.
The difference in exposure is perhaps part of it, but the reality is that film at the time couldn't accurately represent darker skin tones. It's not even just about exposure, but color itself wasn't accurate.
That's compounded by how much lighting education is based around white people with blond hair.
This talk from SIGGRAPH 2021 is excellent at going into the history of film and lighting and how it favors a very specific look
The color was not just inaccurate for black skin tones, but for everything, though. So it seems likely it was a technical limitation, not racism. At first there was no color film at all, only black and white photography. People seem to take color film for granted but it was a genuine technical invention.
There is no such thing as accurate colours when you create a picture on paper. It is simply not possible.
What you are doing is making tradeoffs of what looks good and what doesn't. And the tradeoff that was made was that white skin needed to look good and black skin didn't matter.
Isn't the issue rather with the background that is too white (at least in the sample photographs of the article)? How do you even make black skin look bad?
A few issues with this. (1) it can be both. Disadvantages of dark skin tones combined with apathy/lack of effort to address it. (2) It can be an effect of systemic racism without being an intentionally conscious effort at white supremacy. (3) People did tune the technology specifically to improve for their main subjects, who were white. Again that's not to say they intentionally made them worse for others, but that was still the outcome. (edit: 4) do you disagree with anything specific in the article?
My understanding of the argument is that institutional racism means all the white people simply didn’t think about photographing dark people. Not so much “omg evil they purposely excluded blacks!!” and more “wow it didn’t even cross their minds as important, if there were more black people in those positions they’d probably think of that”
ie. it’s about using more inclusive test cases and acceptance criteria for engineering projects
Edit: then again they could still decide to make the tradeoff to “ignore” 13% of the population in their products if the extra engineering required increased costs/time by enough
I think essentially that's what this boils down to. Color slide film of the time, like Kodachrome, had a narrow range of exposure. I see no reason to believe this was anything but a technical limitation. A slide film with more stops of exposure would have been a huge commercial success.
Given that narrow range, you could either expose for lighter or darker skin tones. And this is where institutional racism creeps in. In a group where everyone had similar skin tones, you wouldn't face an issue exposing the film properly. But where there was a variety, it was commonplace to treat exposing for the lighter skin tones as more important. The group with darker skin was treated as if it didn't matter, or mattered far less.
I think this is the key idea. Even rejecting the idea that Kodak and other film companies made the choice to exclude a group of people, they created a default, an inertia, which had to be worked against to be equitable. My mind keeps coming back to the Pareto principle. In pursuing the 80% of customers (the wealthier group, the easier to convert, the most likely to use your product), isn't it often the same 20% who are left out, and don't they matter?
That's a fair criticism. You could make the case that the film companies didn't care enough to test the film's color reproduction for anything but caucasian skin tones, or didn't put the research dollars into fixing the problem. At the root of that would be racism.
I like the term this presentation used for situations like this: "anti-racist research"
https://youtu.be/ROuE8xYLpX8
Like you said, at the root of that is racism, but the presentation frames it in an interesting way. Not everyone who takes part in this research is racist, but they're the products of racism in society.
So color not being accurately reproduced could either be a direct product of racist people or the result of people in a society that doesn't value everyone equally.
But I like the wording of "anti-racist research" because it puts the onus on actively analyzing whether things are fair, rather than assuming they are. Even people who aren't actively racist can benefit their research by trying to push past the biases inherent in society that permeate everything.
I (an outsider), think you're about half right. Most photography until relatively recently was just dumb tech. Decide an exposure and go. It just so happened that it worked out well for lighter skinned people, and less so for others.
In that I mean, I don't think the companies were ever purposefully blind to darker skins - just stuck with it and said 'good enough ship it.' It also makes me wonder what if a darker skinned person invented the tech. Would they say it's garbage? Did that give light skinned people an advantage? Interesting thoughts.
Even today this is only accomplished on phones by HDR to the point of making every photo look fake. Apparently photography is hard, it must be if we are still here with this problem... which makes me appreciate my own eyeballs even more.
> just so happened that it worked out well for lighter skinned people
Centuries of collective effort by engineers and chemists went into these photographic technologies. If photographs of wealthy people in Rochester, NY in the first half of the 20th century had looked terrible, they would have kept tweaking the technology until they fixed it.
The problems being discussed are about more than just the ability of film at the time to capture things; it's also the education around it. Shirley cards and books from that time were built to normalize white skin and blonde hair. It wasn't even considered a problem that black skin wasn't reproduced well until furniture and chocolate manufacturers complained that their products weren't well shot.
There's both a technical and a societal element to it.
I suggest watching this video, especially from the 25-minute mark
I think this is incredibly trivializing of both the tech and art.
So getting exposure isn't just an arbitrary value. It's relative to tons of things, so it's always been possible to shoot darker skin.
The issue wasn't just exposure though. It's actually quite possible to shoot darker skin in lower light. You just need to know how to light it and capture the skin. (E.g darker tones are more specular on film)
The issue was color calibration and education. Film was designed to highlight certain tones, lighting setups were geared towards certain demographics. Those were the demographics that a segregated society preferred to show, and in turn made it so darker skinned demographics weren't equally visually representable.
You absolutely do not need HDR photography to shoot darker skin with lighter skin. In fact most cameras have enough dynamic range today to deal without an HDR capture.
>So getting exposure isn't just an arbitrary value. It's relative to tons of things, so it's always been possible to shoot darker skin.
In photography, exposure is the amount of light per unit area (the image plane illuminance times the exposure time) reaching a frame of photographic film or the surface of an electronic image sensor, as determined by shutter speed, lens aperture, and scene luminance. Exposure is measured in lux seconds, and can be computed from exposure value (EV) and scene luminance in a specified region. [1]
Image lightness can be affected by both exposure and ISO. But since film had a reduced dynamic range, the truth is it couldn't reliably reproduce both dark tones and skin tones at the same time. It's a technical limitation.
>You absolutely do not need HDR photography to shoot darker skin with lighter skin. In fact most cameras have enough dynamic range today to deal without an HDR capture.
Actually, if you have two individuals side by side, one with dark skin, one with white skin, you can't shoot both at the same time and have both look good if the lighting is the same.
Cameras can record at most 14-15 EVs of dynamic range. The difference between the light reflected by those two people is more than that.
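As a quick aside on the numbers being thrown around: the exposure value (EV) arithmetic itself is standard, log base 2 of N²/t. Here is a small sketch (the helper name `exposure_value` is my own, not from any library) showing how EV comes out of aperture and shutter speed:

```python
import math

def exposure_value(f_number: float, shutter_s: float, iso: float = 100.0) -> float:
    """EV = log2(N^2 / t), with an optional ISO adjustment
    relative to the ISO 100 baseline."""
    ev = math.log2(f_number ** 2 / shutter_s)
    return ev + math.log2(iso / 100.0)

# f/8 at 1/125 s, ISO 100 comes out around EV 13 (bright overcast light).
print(round(exposure_value(8, 1 / 125), 1))  # → 13.0
```

Doubling the ISO adds exactly one EV, which is why "stops" and "EVs" get used interchangeably in this thread.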
You absolutely can shoot two people with different skin tones side-by-side, but it requires some skill.
I invite you to look through various press photographs of Martin Luther King Jr. meeting Lyndon Johnson. Some of them do a very good job of capturing both men; others don't.
It is going to be incredibly difficult to get two people, in the same light, to be that many stops apart just by virtue of their skin if you're correctly exposed.
If you're under- or overexposed, someone might clip out sooner, but a well-exposed image will have enough latitude to capture both.
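A rough back-of-the-envelope supports this. Using assumed (not measured) reflectance figures, light skin reflecting very roughly 40-50% of incident light and dark skin roughly 10-15%, the gap works out to only about two stops, far inside a modern sensor's range:

```python
import math

# Illustrative reflectance values; these are assumptions for the
# sake of the arithmetic, not measurements.
light_reflectance = 0.45
dark_reflectance = 0.12

# Stops apart = log2 of the ratio of reflected light.
stops_apart = math.log2(light_reflectance / dark_reflectance)
print(round(stops_apart, 1))  # → 1.9
```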
> You absolutely do not need HDR photography to shoot darker skin with lighter skin. In fact most cameras have enough dynamic range today to deal without an HDR capture.
We may be conflating techs(my fault). I'm mostly interested in digital, so I can't defend much about film.
I know of no phone camera that can do what you say. As a diverse family, either I'm blown out, or my wife looks like a chunk of coal. Even her Fuji digicam behaves this way.
It does require quite a bit of processing, that's true, especially on camera phones.
iPhones and Pixels have gotten a lot better at optimizing images for multiple ethnicities, but not everyone has the same processing acumen.
On a standalone camera, shoot RAW if you can. If you're shooting JPEG, you're beholden to the camera's signal processing, which can still be quite biased toward certain tones. It'll require you to process the image yourself, but you'll have access to a lot more range than the JPEG output.
>It does require quite a bit of processing, that's true, especially on camera phones.
If you mess with the curves by raising the shadows or lowering the highlights, the image will look strange.
If you photograph something with a large difference between lights and darks and want to convey as much information as you can, you have to use some additional lighting, like speedlights, studio lights, reflectors.
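For concreteness, "raising the shadows" is often implemented as something like a gamma-style curve. A toy sketch (the function name and the `amount` parameter are made up for illustration, this is not any particular editor's algorithm):

```python
import numpy as np

def lift_shadows(img: np.ndarray, amount: float = 0.3) -> np.ndarray:
    """Apply a simple power curve that brightens shadows much more
    than highlights. img holds linear values in [0, 1]."""
    return img ** (1.0 - amount)

# A dark midtone (0.2) is lifted noticeably; a highlight (0.9) barely moves.
img = np.array([0.2, 0.5, 0.9])
print(np.round(lift_shadows(img), 2))  # → [0.32 0.62 0.93]
```

Pushing `amount` too far is exactly the "image will look strange" effect the parent describes: shadows get flat and noisy while highlights stay put.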
In the 99pi episode they talk about how Kodak released better film in the 70s/80s that could take good pictures of dark people and marketed it as “Able to photograph the details of a dark horse in low light”. Widely considered a coded phrase for black people.
Marketing speak, no matter how poor the taste. I mean, how long have laundry soaps been saying they make whites whiter? At my age, soaps should be making plasma if those claims held any credence.
That aside...my wife is a dark skinned person. That I cannot take a normal photo of her without blowing out the background is infuriating. Think a simple sunset pic. If I just snap, she looks like a black blob. If I click her face, she looks normal but standing in front of a nuclear blast.
Not because I think any company is racist, but because I wonder why we are still stuck with this. If our eyes have always seen people a certain way, why, in the age of nanobots and AI and everything else, are we still failing so badly?
Capturing bigger differences between highlights and shadows (i.e. dynamic range) is one of the primary goals of camera manufacturers. It's one of the most critical aspects of camera performance, and they're competing to deliver better dynamic range.
So, not being argumentative but curious, and also rather uneducated here...why?
Where is the big disconnect in that our own eyes can do these things so easily, and no camera can? It's not any insult to cameras, but a genuine question.
Our eyes are not taking a single still photo, they’re taking in real life in real time. Our eyes constantly adapt to changing conditions as we look around a room. This capability is finite though, as anyone who has walked out of a dark room into the midday sun has experienced.
The limitation in all photographic technologies has to do with the fundamental light gathering mechanism. In the case of photographic film, it is grains of photosensitive chemicals such as silver halides. In the case of digital cameras it is photosensitive semiconductors called CCDs or CMOS sensors. These sensors (chemical and digital) operate by collecting photons during the exposure. In either case, and with our eyes too, they can only collect so many photons before they “fill up.”
In the case of film, this meant enough photons had arrived at the grain of silver halide to provide it with the energy needed to cause the reaction to take place. This reaction causes an irreversible change in the colour of the grain. In the case of digital sensors, when enough photons arrive the device stores its maximum possible charge which is then read out and converted to a number (usually the maximum number for that integer type).
So then why can’t we make photographic technology that can gather a larger number of photons before “filling up”? It’s a tradeoff. The larger we make the (sensors or film grains) the more capacity for light gathering (called dynamic range) they have, but the lower the resolution they have. Unfortunately, in the world of consumer devices, resolution (megapixels) is king.
The fundamental problem we’re dealing with in nature is that the dynamic range of real life is many orders of magnitude. Outdoor midday sunlight can be something like a hundred million photons per second per pixel whereas night time under starlight that falls to less than one photon per second per pixel.
The short answer is that human eyes have really good dynamic range (around 24 bits). For comparison, most monitors only have 8 bit pixels, and the best cameras have around 16 bits. That means that human eyes are much better at perceiving combinations of bright and dim areas at the same time.
Our eyes aren't actually 24 bit per frame. In fact our eyes are pretty shit. What they're good at is fairly quickly moving and adjusting, and are backed by a processor (our brain) that is capable of converting that data into a spatially aware image.
In fact our brain doesn't even really see much of the view. It's filling in a lot of blanks based on the understanding of the scene.
Cameras are trying to capture that into a still. You can capture multiple stills together with different exposures, but you're going to battle things like motion etc..
Most cameras these days are in the 10-14 bit range, which is a function of the range the photosites will respond to before saturating; but also, given that most displays are 8-bit, after a point there are just diminishing gains.
14 bits allows for a lot of latitude when shooting and processing.
I'm not an expert on this topic, but I'm certain it has to do with space constraints.
Supporting more range of color depth means that the raw output of the sensor is necessarily bigger. This means we have some range of tradeoffs to make in the camera:
* Save the sensor output directly as a RAW image onto storage. This means your storage medium can store fewer photos. You're also now beholden to the speed of your storage. Specialty photography like wildlife and sports impose requirements like being able to take lots of pictures per second; cameras solve this by having large memory buffers. Slow storage means the buffer fills up quicker.
* Compress your RAW images with lossless algorithms. These do a decent job, but they still result in big files on storage media. Same problems as earlier point. Need quite a bit of memory and CPU on the camera to deal with this, and plus this now competes with how many shots per second you can buffer.
* Compress your images with lossy algorithms (output RAW or JPEGs). Pretty much unacceptable since you don't get to decide what detail the algorithm removes.
Of course, this also imposes hardware demands on the photographer's computer. Big images require:
* Lots of storage. Maybe time to pay for a lot of cloud storage?
* Fast storage too because otherwise Lightroom takes forever to load an image
* Lots of RAM so that Lightroom can cache more images at once so you can flip back and forth with more ease.
* Fast RAM and CPU so that Lightroom can do adjustments without much lag (some of this can be offloaded to GPU, but it's not a drastic improvement).
Now, I think these problems are solvable today. They just require obscene amounts of money.
I do want to add, RAW images by today's consumer-level cameras do support a fairly wide dynamic range. A skilled photographer will know how to work with RAW files to bring out details from the extreme ends of the range (e.g. dark skin against a bright background). But it does require setting up the shot well, and one can only capture so much range (e.g. I have a few bird pictures where part of the bird is still totally blown out and unrecoverable because of sunlight).
Also, I've done some more search and the 24 bit figure for the human eye is somewhat disputed. It looks like the actual figure is around 16 stops at a time, but our iris gives another 8 or so stops (but only after an adjustment period).
Part of the answer for why dynamic range on modern cameras isn't better is that we're only starting to see consumer grade displays with more than 8 bits. Given that, most camera makers would rather put increased image quality resources towards upping the frame-rate or pixel density, which make it harder to get high dynamic range.
> cannot take a normal photo of her without blowing out the background
If your wife’s skin is significantly darker than the background and your photographic process doesn’t have the dynamic range for both, you can e.g. use fill flash, bounce extra light off a large reflector onto her face, use a filter over the lens (works well for taking B&W images in some situations), or take a picture using a process with better dynamic range (and optionally play with the image when printing, either lightening specific parts or applying some global lightness curve; e.g. you can take 2 pictures with one exposed for the foreground and the other exposed for the background and blend them in software).
> Think a simple sunset pic. If I just snap, she looks like a black blob.
This happens to a substantial extent to people of all skin tones, though the darker the skin the harder it gets. The sunset is much, much brighter than foreground objects. Try fill flash (and if the results are poor, try to figure out how to position your flash somewhere further from the lens, e.g. by using a separate flash).
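The two-exposure blend mentioned above can be sketched in a few lines. This toy version (real HDR merging also aligns frames and tone-maps the result) weights each pixel by how close the brighter frame is to clipping:

```python
import numpy as np

def blend_exposures(dark_exposed: np.ndarray, bright_exposed: np.ndarray) -> np.ndarray:
    """Blend two exposures of the same scene (linear values in [0, 1]).
    Where the bright-exposed frame is blown out, fall back to the
    dark-exposed frame; elsewhere, prefer the bright-exposed one."""
    # Weight by how far the bright exposure is from clipping at 1.0.
    weight = np.clip(1.0 - bright_exposed, 0.0, 1.0)
    return weight * bright_exposed + (1.0 - weight) * dark_exposed

# One shadow pixel (keeps detail from the brighter frame) and one
# highlight pixel (the clipped bright frame defers to the darker one).
scene_dark = np.array([0.05, 0.60])    # exposed for the sunset
scene_bright = np.array([0.40, 1.00])  # exposed for the face
print(np.round(blend_exposures(scene_dark, scene_bright), 2))  # → [0.26 0.6]
```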
I much appreciate the advice, it's not lost on me. I guess I'm just confused why we cannot solve this problem today in a normalish way, as our eyes see things. And more confused that we call tech from 50 years ago racist because it could also not achieve it.
I think our brains use a lot of visual processing to make that work. Similar to how modern phones use AI object detection to identify important parts of low light photos and use different lighting params for different parts.
My iphone 13 for example visibly highlights my face in a backlit selfie a few seconds after taking the photo.
Your body's visual processing system (eyes and brain) are the result of millions (maybe billions) of years of evolution.
Photography and digital sensors are less than 200 years old.
The film technology 50 years ago was not racist. The people who designed that film technology didn't care enough to put the effort into developing film technology that could discern multiple shades of darker skin tones. It is not direct racism like actively trying to take away a black person's right to vote. It is indirect racism of not caring enough about black people to try to make film record their skin tones better. Once the furniture and chocolate industry showed up with the money to have brown tones recorded better on film suddenly Kodak was interested.
It's not even that they didn't care. It likely never crossed their minds because of how segregated America and specifically Rochester was at that time.
It's an example of Moravec's paradox, in which something that seems intuitive, simple, and utterly effortless to us turns out to be massively difficult to do in software. In this case, it's a matter of raw processing power and Moore's law eventually catching up to the dynamic range of human vision, but there are sporadic deep learning attempts to tackle it.
The size of sensors is decreasing, and currently you can recreate the performance of that 2012 tech with a sensor about 20% its size but about the same price.
The problem with cameras is this:
You need more sensors crammed into a smaller space (megapixels), with higher resolution of the range of frequencies each sensor captures (dynamic range), and/or a higher rate of sampling per sensor (framerate) to improve the quality of images.
The chips used are already running at maximum speed, so the size of a sensor is constrained by how many pixels can be processed at framerates expected by consumers, usually 30 to 60 fps for video.
You can adjust the dynamic range using physical color filters in the lenses, removing all infrared for example. Then the sensors will only trigger on visible light, and there are other tricks used to optimize what's being captured. The issue is sensitivity: the number of photons hitting a sensor required to activate it. Darker colors reflect fewer photons, so lighter colors "overwhelm" the sensors. Even multiple samples can't overcome some of the limitations, among which are certain conditions in which black skin shows up weirdly in video and images (to the perennial frustration of digital photographers, and a frequent topic of complaint amongst pornographers).
The latest high-end sensors have between 18 and 20 bits of dynamic range, or 120 dB of resolution. This is getting close to human performance, and if the world doesn't fall apart in the next ten years, there will be human-equivalent HDR cameras on phones, and there won't be a camera problem for darker-skinned people anymore.
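The bits-to-decibels conversion here is just 20·log₁₀ of the linear ratio, about 6 dB per stop, so the 120 dB figure checks out:

```python
import math

def dynamic_range_db(bits: float) -> float:
    """Each bit (stop) of dynamic range doubles the signal ratio,
    i.e. contributes about 6.02 dB."""
    return 20 * math.log10(2 ** bits)

print(round(dynamic_range_db(20), 1))  # → 120.4  (the high-end sensor above)
print(round(dynamic_range_db(14), 1))  # → 84.3   (a typical current camera)
```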
In the meantime, if your camera can save photos in RAW format, that's the unprocessed output from the sensor itself. You can process the RAW files with desktop applications like Photoshop to correct the colors, and turn those sunset photos of your wife into memories instead of disappointments.
https://en.wikipedia.org/wiki/Raw_image_format
There's a whole slew of tutorials and applications for correcting issues with skin tones and different lighting conditions. There's a bit of a learning curve, but something most people can accomplish in an afternoon.
The article didn't say supremacy. I think the closest it came was bias. It cited a former manager of Kodak Research. And one of its points was that holding all else constant was part of the problem.
I think it's certainly possible to imply something without saying it. But I think it's important that we don't assume that anybody who doesn't say X is implying X just because they talk about Y which other people use to imply X. Otherwise it becomes impossible to talk about Y.
If you think in this specific case that the authors are implying X by saying Y, then you need to explain why you think that.
I only skimmed the piece but from my skim it had some undertones of "institutionalized racism in the camera industry".
In particular:
> ... And that’s because, even if we think of the camera as a neutral technology, it is not. In the vast spectrum of human colors, photographic tools and practices tend to prioritize the lighter end of that range. And that bias has been there since the very beginning.
And that bit is in the intro portion of the article, making me think that it's part of the premise that the camera is "not a neutral technology".
You are right, in that the article doesn't see camera (or any technology really) as neutral.
For those not familiar with 99pi, the focus is on how makers/designers make choices that are often invisible to us but influence the environment we live in. This is a fantastic podcast I would recommend to anyone curious about the world we live in. It has biases, but they are stated and argued, not just thrown around irresponsibly.
In this specific case, there were two specific angles:
- the film industry as a whole didn't really care about diversity, and film dynamic range was deemed good enough if it could represent lighter flesh tones and bright surroundings. A Kodak exec is quoted on this subject as really not giving a damn about darker skins, and their rationale for making efforts to expand dynamic range a lot further was to allow better representation of white and dark chocolate products, among others.
You might disagree with that take, but it all comes from Kodak employees of that era, as far as I understand.
- the other was on print calibration: whatever you did with your camera, printing at a Kodak shop would be done with preset settings adjusted for light skin tones (the "Shirley cards" of the title are the photos of white girls used as reference)
Here again, the info comes from technicians who worked on printing at the time.
PS: Then there's more stories on group photos, light meters etc. if you care at all I would recommend listening to the whole thing. I also do photography (as an amateur) and it was still interesting to me.
The issue is, you might get a thousand readers interested in the physics and chemistry of film in a specialty magazine about photography, and in the oral history of black photographers of that era and the tricks they used to capture these hard-to-get ranges of color.
Or, if you write it online and blame racism, you can get 100,000 readers once you get someone majoring in liberal arts to share it.
A typical camera can take a well exposed photo indoors under natural light and outdoors in full sun. The former is about 50 lux, and the latter is 100,000.
It seems unlikely to me that the amount of light reflected off darker skin fundamentally confounds cameras when they can handle four orders of magnitude of illumination variation gracefully.
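Expressed in photographic stops, those four orders of magnitude come to about 11 EV:

```python
import math

indoor_lux = 50.0        # indoors under natural light
full_sun_lux = 100_000.0 # outdoors in full sun

# Ratio of scene illuminance, expressed in stops (log base 2).
stops = math.log2(full_sun_lux / indoor_lux)
print(round(stops, 1))  # → 11.0
```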
This makes no sense given what it says in the article: that the problem more or less went away once black consumers started complaining and Kodak changed their film chemistry.
The article says that Kodak changed their film chemistry _not_ because they were listening to the complaints from people of color, but because they were going to lose the business of two big professional clients: a chocolate company and a furniture company.
Yeah, I thought it was really interesting. Spoiler alert: in a random sequence of events, Lena, the model from the JPEG study, ends up becoming a Kodak Shirley girl too.
> One of the Polaroid ID-2’s most important design features was a “boost” button that when pressed would boost the flash exactly 42%. Polaroid advertised this special feature for general lighting purposes. However, researchers and artists assert that the ID-2 camera, and its boost button, were actually created for and catered specifically to South Africa’s policies of Apartheid. The white minority South African government largely used this camera for dompas, or passbooks, which helped sustain the Apartheid regime via surveillance. This was partially because the device was portable, fast at taking and developing photos, and created difficult-to-forge images because of its powerful lamination. But the most compelling feature was that the boost button increased the flash’s intensity by the exact amount it took to account for the extra light absorbed by black skin: 42%.
> Polaroid claimed only 20% of the film they sold in South Africa ended up being used for passports, and according to Polaroid, in 1971 only 65 systems were sold before sales were stopped, and none of those systems were sold to government agencies. However, the Polaroid Revolutionary Workers Movement countered that sales were still going through indirect channels. Polaroid continued lying from 1971 to 1978, claiming that they had ceased supplying materials to the regime, when in fact there was an elaborate shell game which allowed them to sell through a third party (Caulfield 2015).
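For a sense of scale, the 42% flash boost quoted above works out to about half a photographic stop. A quick check (pure arithmetic, using only the quoted figure):

```python
import math

boost_factor = 1.42  # the 42% flash boost quoted above
stops = math.log2(boost_factor)
print(f"{stops:.2f} stops")  # roughly half a stop of extra exposure
```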
Google's main marketing slogan for their new phones is that they are good at taking photos of different skin tones. Makes me wonder how diverse most people's acquaintances really are, for that to be a selling point.
I watched a video recently by a reviewer who had taken 1000 photos with both a Pixel 6 and an iPhone 13 Pro. The difference was really quite striking: the Pixel 6 was much better at handling faces of different skin colours. This appears to be handled computationally, by detecting faces in the frame and then adjusting appropriately.
I cannot look at the English page for it anymore, because of Google's automatic forwarding rules. Maybe they have changed the marketing by now? I'm pretty sure it was marketed for all skin colors, not just darker skin, and the talk about taking diverse photographs was the first "marketing section" after the tech specs. Yeah, they also announced they have a "tensor processor" now. I don't think that was/is an actual selling point.
I didn't refer to the video ad, but to their web site. I'm saying their main selling point was (and is) that the camera does well with different skin tones.
Was it really an issue before, though? The article mentions that digital cameras were already better at taking photos of darker skin. I never noticed that photos of black people were of bad quality on the internet.
If you tune up your camera to "see what is there", people complain that their blemishes are too visible.
So generally the processing chain is tuned up to be more flattering.
I took a photo of my son who had acne at the time and did the opposite and made it look like he'd smoked 10,000 cigarettes. He and I thought it was a good photo artistically but I wouldn't show it to his grandmother.
What's flattering is going to depend on your skin type. People with very light skin have bad problems in this respect.
Auto-exposure is also an issue. It assumes the scene is middle gray and if the objects in the scene are unusually light or dark it will get it wrong.
I took a photo of a friend who was wearing a black gothic dress, and the +2 ... -2 exposure dial was not enough to get her face right and make the dress really black; I had to go to full manual.
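The middle-gray assumption described above can be sketched numerically. A reflective meter picks an exposure so the metered area renders as roughly 18% gray, so the metering error in stops for a subject of different reflectance is just a log ratio. The reflectance numbers here are illustrative assumptions:

```python
import math

MIDDLE_GRAY = 0.18  # reflectance a reflective light meter assumes

def metering_error_stops(subject_reflectance: float) -> float:
    """Stops by which auto-exposure misses when the metered subject
    isn't middle gray. Positive = camera overexposes (subject darker
    than 18%), negative = camera underexposes (subject lighter)."""
    return math.log2(MIDDLE_GRAY / subject_reflectance)

# Illustrative reflectances (assumed values, not measurements):
black_dress = 0.03   # deep black fabric
snow = 0.90          # bright snow or a white dress

print(f"black dress: {metering_error_stops(black_dress):+.1f} stops")
print(f"snow:        {metering_error_stops(snow):+.1f} stops")
```

With a 3% reflectance black dress, the meter is off by about +2.6 stops, more than a ±2 EV compensation dial can correct, which is consistent with the anecdote above.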
Now that I think about it, I haven't taken a lot of portraits of black people and I think I want to take some to learn how.
Kodak got into a lot of trouble over the color rendition of their films in that time period.
For one thing they refused to publish data about how the colors faded over time, and the truth was awful.
Wedding photographers were taking a good chunk of change to make a product that should last a lifetime, and instead the prints faded quickly. Everything from the most ephemeral color snaps to art prints was affected; they didn't make any product that would last unless you kept it in a dry freezer.
Physics is racist. Lighter objects reflect more light than darker objects.
Film and sensors cannot capture the dynamic range between something very light and something very dark, whether those subjects are human or not. If you photograph a brown bear next to a polar bear, you will have difficulties discerning details in both.
The maximum dynamic range that common cameras can capture these days is about 15 EV.
Even photographing light-skinned people against a sunset or dark-skinned people at night is an issue, and you have to use additional light like a flash or a reflector if you want details in both the person's face and the surroundings.
And yet, Kodak managed to tune the gamma curve of their film stock to be more flattering to dark objects due to pressure from the chocolate and furniture industries.
There's still a major financial incentive here. It's what, 30%+ of the US population that isn't Caucasian or very light-skinned? And that's before you look at the international market.
While white skin is going to be the priority, you don't want to fall behind competitors on film that does better across the board. Not only do you lose the dark-skin market, but also potentially professionals like school/wedding/corporate photographers who need to photograph everyone and don't want to faff around with different films for different photos.
https://qz.com/africa/1633699/transsions-tecno-infinix-camer...