christian [he/him, any]

  • 3 Posts
  • 64 Comments
Joined 5 years ago
Cake day: September 13th, 2020


  • The point is that “I don’t like reading” isn’t much of a barrier, considering that people who couldn’t read managed to learn the theory.

    My point is that not being convinced it is necessary for survival is a vastly bigger hindrance than anything else we’re discussing. People only have so many free hours outside of what’s necessary to survive, and they are not going to put many of them into something unless they’ve been convinced it will pay dividends. There is nothing wrong with lamenting that people don’t invest time into it, but doing so with open condescension rather than with empathy for the value of their time is also a harmful choice.


  • I get that it’s not the intention, but this post reads like it’s praising one set of people as innately good and scolding another as innately bad.

    Illiterate Vietnamese, Chinese, Russian, Cuban, Laotian (etc.) peasants who were born into a level of poverty which Americans not only will never experience but will never even witness were able to learn theory while being murdered by the West and by the domestic bourgeoisie.

    I think this might indicate that they were shown, or convinced in some way, that reading was important for their survival. I think that’s a somewhat stronger understood incentive than anything we have in America.

    However, you’ll have to accept that when someone who knows better than you, because they did read theory, corrects you, you should listen to them.

    My intuition is that giving the impression that people who read theory believe “you should defer to someone who has read more theory than you, or who has read X amount of theory” is extremely counterproductive to building this understood incentive. I think it’s an idea worth letting go of.


  • I’ve read a few good blog posts by Doctorow. A couple of mentions of AI use in healthcare made the point clearly enough to be terrifying:

    The narrative around these bots is that they are there to help humans. In this story, the hospital buys a radiology bot that offers a second opinion to the human radiologist. If they disagree, the human radiologist takes another look. In this tale, AI is a way for hospitals to make fewer mistakes by spending more money. An AI-assisted radiologist is less productive (because they re-run some X-rays to resolve disagreements with the bot) but more accurate.

    In automation theory jargon, this radiologist is a “centaur” – a human head grafted onto the tireless, ever-vigilant body of a robot.

    Of course, no one who invests in an AI company expects this to happen. Instead, they want reverse-centaurs: a human who acts as an assistant to a robot. The real pitch to hospitals is, “Fire all but one of your radiologists and then put that poor bastard to work reviewing the judgments our robot makes at machine scale.”

    No one seriously thinks that the reverse-centaur radiologist will be able to maintain perfect vigilance over long shifts supervising automated processes that rarely go wrong but whose rare errors must be caught. The role of this “human in the loop” isn’t to prevent errors. That human is there to be blamed for errors. The human is there to be a “moral crumple zone”. The human is there to be an “accountability sink”. But they’re not there to be radiologists.

    Rephrased in another post:

    AI radiology programs are said to be able to spot cancerous masses that human radiologists miss. A centaur-based AI-assisted radiology program would keep the same number of radiologists in the field, but they would get less done: every time they assessed an X-ray, the AI would give them a second opinion. If the human and the AI disagreed, the human would go back and re-assess the X-ray. We’d get better radiology, at a higher price (the price of the AI software, plus the additional hours the radiologist would work).

    No one who invests in an AI company believes that their returns will come from business customers who agree to increase their costs. The AI can’t do your job, but the AI salesman can convince your boss to fire you and replace you with an AI anyway.

    In hindsight it’s an obvious point that this is how AI would be used in capitalist healthcare, but it’s natural to want to take the pitch at face value, because the centaur model genuinely could improve healthcare rather than make it worse.
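
    To make the contrast concrete, here’s a minimal sketch (my illustration, not from Doctorow’s posts) of the two review loops described above. The function names and accuracy figures are hypothetical stand-ins, chosen only to show the tradeoff: the centaur loop spends extra human time to catch more errors, while the reverse-centaur loop is cheap and can never be better than the model.

    ```python
    import random

    # Hypothetical accuracy rates, picked only for illustration.
    HUMAN_ACC = 0.95
    MODEL_ACC = 0.90

    def human_read(xray):
        # Stand-in for the radiologist's read: right 95% of the time.
        return xray["truth"] if random.random() < HUMAN_ACC else not xray["truth"]

    def model_read(xray):
        # Stand-in for the AI's second opinion: right 90% of the time.
        return xray["truth"] if random.random() < MODEL_ACC else not xray["truth"]

    def centaur_review(xray):
        """Human-led: the AI is only a second opinion, and a
        disagreement sends the human back for a re-read."""
        finding = human_read(xray)
        if model_read(xray) != finding:
            finding = human_read(xray)  # the extra human hours live here
        return finding

    def reverse_centaur_review(xray):
        """Machine-led: the model decides at machine scale; the lone
        human sign-off is nominal and changes nothing."""
        return model_read(xray)

    if __name__ == "__main__":
        random.seed(0)
        cases = [{"truth": random.random() < 0.1} for _ in range(100_000)]
        for review in (centaur_review, reverse_centaur_review):
            correct = sum(review(c) == c["truth"] for c in cases)
            print(f"{review.__name__}: {correct / len(cases):.1%} correct")
    ```

    Under these made-up numbers the centaur loop comes out around 98–99% correct at the cost of re-reads, while the reverse-centaur loop is capped at the model’s 90%: better radiology at a higher price, versus cheaper, worse radiology with a human left holding the blame.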