Migrated from rainynight65@feddit.de, which now appears to be dead. Sadly lost my comment history in the process. Let’s start fresh.

  • 0 Posts
  • 33 Comments
Joined 4 months ago
Cake day: June 24th, 2024

  • Good on this kid for going to such lengths to verify his hypothesis and show a serious weakness in railway infrastructure. I hope he goes on to become a serious railway enthusiast and advocate for safe, efficient rail.

    However, there are far too many factors behind derailments and safety incidents in US rail operations to pin them on this one issue. Once the major operators embarked on a journey to squeeze more and more money out of the business, a lot of things happened. Trains became longer - excessively so. It used to be that a train 1.4 miles long was considered massive. These days that’s the norm. Can you imagine a train so long that, in hilly terrain, sections of it are being dragged uphill while other sections are pushing downhill?

    Reductions in staff, motive power fleets and maintenance have led to trains being badly composed, with loads distributed in a less than optimal way. An old railway man once told me that the only time he broke a train was when, in a rush and under pressure, he agreed to attach a rake of fully loaded freight cars to the end of a train of empties. Unequal load distribution has played a role in a number of major derailment incidents, among them a derailment in Hyndman, PA, which required the town to be evacuated for several days.

    ProPublica has a series of articles on rail safety, including one specifically about the dangers of long trains. So while the worn-out springs certainly don’t help, they are only one of many things impacting rail safety, and probably not even the lowest-hanging fruit.


  • Thing is, I am actually Gen X. Early, even. And I look at the Boomers and see the generation that kept pulling up the ladder. They got free education and privatised it to make it expensive for us. They got free healthcare and privatised it to make it expensive. They got into the housing market cheaply and started using it as an investment and speculation vehicle, making it harder for each subsequent generation to get in. They were pretty much the last generation that could raise a family on a single income. Climate change is front and center in my generation’s mind - we’ve known for over 30 years what’s coming. And when you look at those who most fervently oppose climate action, it’s all old fogeys - and I say that being very conscious of the fact that I am approaching ‘old fogey’ status from the perspective of Gen Z and Gen Alpha.

    I can only imagine how today’s teenagers and young adults feel…






  • Equally then, the nuclear disasters shouldn’t count, right?

    Deaths from an accident at an active nuclear power plant are not the same as deaths caused by a burst dam that was originally intended to produce electricity one day but never produced any - especially if you call the statistic ‘Deaths per unit of electricity production’. At the time of the accident it was just a dam; construction of any hydroelectric facilities was nowhere near beginning, so calling it a ‘hydropower accident’ is highly debatable (probably at least as debatable as calling nuclear ‘conventional’).

    Without the inclusion of those deaths, hydro would be shown to be even safer than nuclear, given that it has produced nearly twice as much electricity in the time span covered by those statistics while having caused a similar number of deaths (if you continue to ignore the increased miner mortality - otherwise nuclear looks far worse). A rough back-of-the-envelope comparison at the end of this comment illustrates the point.

    The article also does not explain how it arrived at the figure of 171,000 deaths, given that estimates for the Banqiao dam failure range between 26,000 and 240,000. The author mentions (but does not cite) a paper by Benjamin Sovacool from 2016, which analyzes the deaths caused by different forms of energy but, crucially, omits the Banqiao dam death toll. I will try to get hold of that paper to see the reasoning, but I suspect it may align with mine.

    How do you assume it’s ignoring their increased mortality?

    The article makes zero mention of any such thing, and the section about how the deaths are calculated (footnote 3 in this section) only calls out the deaths from Chernobyl and Fukushima. Direct quote from the footnote:

    Nuclear = I have calculated these figures based on the assumption of 433 deaths from Chernobyl and 2,314 from Fukushima. These figures are based on the most recent estimates from UNSCEAR and the Government of Japan. In a related article, I detail where these figures come from.

    No mention at all of any other deaths or causes of death, nothing whatsoever. It’s the deaths from two nuclear accidents, that’s all. The figures from the cited study alone would multiply the number of nuclear deaths in this statistic. What’s worse, the author has published another article on nuclear energy which essentially comes to the exact same conclusions. But if you include deaths from a burst dam that has never produced electricity (but was planned to do so eventually), then you must include deaths among people who mine the material destined to produce electricity in a nuclear plant.

    To me it simply looks like the author of this article is highly biased towards nuclear, and has done very selective homework.
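
    To make the arithmetic concrete, here is a minimal, purely illustrative sketch of the ‘deaths per unit of electricity’ comparison. It uses only the figures already mentioned here (433 + 2,314 nuclear deaths from the article’s footnote, the article’s 171,000 Banqiao figure) plus my own assumptions above - that hydro produced roughly twice the electricity of nuclear and caused a roughly similar number of non-Banqiao deaths. The absolute electricity amount is arbitrary; only the ratios matter.

```python
# Illustrative only: the electricity amount is an arbitrary unit, so read the
# outputs as relative rates, not real deaths-per-TWh figures.

nuclear_electricity = 1.0                       # arbitrary unit of nuclear output
hydro_electricity = 2.0 * nuclear_electricity   # assumption: "nearly twice as much"

nuclear_deaths = 433 + 2314                     # article footnote: Chernobyl + Fukushima
hydro_deaths_excl_banqiao = nuclear_deaths      # assumption: "a similar number of deaths"
banqiao_deaths = 171_000                        # the article's contested Banqiao figure

def deaths_per_unit(deaths, electricity):
    """Deaths per unit of electricity production."""
    return deaths / electricity

print("nuclear:              ", deaths_per_unit(nuclear_deaths, nuclear_electricity))
print("hydro without Banqiao:", deaths_per_unit(hydro_deaths_excl_banqiao, hydro_electricity))
print("hydro with Banqiao:   ", deaths_per_unit(hydro_deaths_excl_banqiao + banqiao_deaths,
                                                hydro_electricity))
```

    Under those assumptions, hydro without Banqiao comes out at roughly half the nuclear rate; with Banqiao included it is more than thirty times worse - which is exactly why counting that dam as a ‘hydropower accident’ decides the whole comparison.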






  • Sure, training data selection impacts the output. If you feed an AI nothing but anime, the images it produces will look like anime. If all it knows is K-pop, then the music it puts out will sound like K-pop. Tweaking a computational process through selective input is not the same as a human being actively absorbing stimuli and forming their own, unique response.

    AI doesn’t have an innate taste or feeling for what it likes. It won’t walk into a second-hand CD store, browse the boxes, find something intriguing and check it out. It won’t go for a walk and think “I want to take a photo of that tree there in the open field”. It won’t see or hear a piece of art and think “I’d like to learn how to paint/write/play an instrument like that”. And it will never make art for the sake of making art, for the pure enjoyment of the process of creating something, irrespective of who wants to see or hear the result. All it is designed to do is regurgitate an intersection of what it knows that best suits the parameters of a given request (aka prompt). Actively learning, experimenting, practicing, trying to emulate someone else’s techniques - making art for the sake of making art - is a key component of how humans learn from and are influenced by others.

    So the process of human learning and influencing, and the selective feeding of data to an AI to ‘tune’ its output are entirely different things that cannot and should not be compared.


  • Generative AI is not ‘influenced’ by other people’s work the way humans are. A human musician might spend years covering songs they like and copying or emulating the style until they find their own, which may or may not be a blend of their influences, but crucially, they will usually add something. AI does not do that. The idea that AI functions the same way human artists do, absorbing influences and producing its own result, is not only fundamentally false, it is dangerously misleading. To portray it as ‘not unethical’ is even more misleading.



  • These people never walk back their bullshit. When called out on it, they will double down. When proven wrong, they will change the topic. But they need to be seen as strong, and right. Admitting that you’re wrong or even apologising is neither - it’s weak, and it can create doubt. If they were wrong about this, then what else are they wrong about?

    They radicalise their followers with lies and falsehoods, and they can only keep that up if they are not seen as being wrong about what they say. They spread their lies with confidence and zeal, and if reality disagrees, then reality is wrong.