  • I don’t read it as magical energy created out of nothing, but I do read it as “free” energy that exists whether this regeneration system is used or not, and that would otherwise be lost as heat.

    With or without regenerative braking, the train system is still going to accelerate stopped trains up to operational speed, then slow them down to a stop, at regular intervals throughout the whole train system. Tapping into that existing energy is basically free energy at that point.
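    To put rough numbers on the energy involved per stop (every figure below is an illustrative assumption for a generic commuter train, not data from any particular system):

    ```python
    # Rough estimate of the braking energy available per stop.
    # All figures are illustrative assumptions, not measurements.
    train_mass_kg = 200_000       # assumed ~200 tonne trainset
    top_speed_ms = 80 / 3.6       # assumed 80 km/h line speed, converted to m/s
    recovery_efficiency = 0.6     # assumed fraction of braking energy actually recaptured

    kinetic_energy_j = 0.5 * train_mass_kg * top_speed_ms ** 2
    recovered_kwh = kinetic_energy_j * recovery_efficiency / 3.6e6  # joules -> kWh

    print(f"Kinetic energy at speed: {kinetic_energy_j / 3.6e6:.1f} kWh")  # ~13.7 kWh
    print(f"Recoverable per stop:    {recovered_kwh:.1f} kWh")             # ~8.2 kWh
    ```

    Multiply that by every stop on every run across the whole network and it adds up, which is the point about tapping energy that’s already being thrown away.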




  • That article has basically been validated over time. At the time it was written, the argument was that monopoly is bad for consumers even if it makes prices cheaper, and that consolidation of producer market power needs to be understood as consumer harm in itself, even if prices or services paradoxically become better for consumers.

    It’s no longer a paradox today, though. Amazon has raised prices and reduced the quality of service by a considerable margin, and uses its market power to prevent the competition from undercutting them, rather than competing fairly on the merits.



  • Unless you are fine pairing solar panels with natural gas as we currently do

    Yes, I am, especially since you seem to be intentionally ignoring wind+solar. A solar+wind+natural gas system can handle all of today’s peaking and baseload needs for less than nuclear can, so nuclear is already more expensive than that type of combined generation.

    In 10 years, when a new nuclear plant designed today might come on line, we’ll probably have enough grid scale storage and demand-shifting technology that we can easily make it through the typical 24-hour cycle, including 10-14 hours of night in most places depending on time of year. Based on the progress we’ve seen between 2019 and 2024, and the projects currently being designed and constructed today, we can expect grid scale storage to plummet in price and dramatically increase in capacity (both in terms of real-time power capacity measured in watts and in terms of total energy storage capacity measured in watt-hours).

    In 20 years, we might have sufficient advanced geothermal to where we can have dispatchable carbon-free electricity, plus sufficient large-scale storage and transmission that we’d have the capacity to power entire states even when the weather is bad for solar/wind in that particular place, through overcapacity from elsewhere.

    In 30 years, we might have fusion.

    With that in mind, are you ready to sign an 80-year mortgage locking in today’s nuclear prices? The economics just don’t work out.
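    To make the watts-versus-watt-hours distinction above concrete, here’s a toy overnight sizing calculation (the load and night length are assumptions for illustration, not real grid figures):

    ```python
    # Toy overnight-storage sizing, to illustrate power (W) vs. energy (Wh).
    # Both numbers are illustrative assumptions, not real grid data.
    overnight_load_gw = 1.0   # assumed average load the storage must carry
    night_hours = 12          # assumed hours without solar, per the 10-14 hour range above

    energy_needed_gwh = overnight_load_gw * night_hours

    # The storage fleet needs BOTH:
    #   power capacity  >= 1 GW   (how fast it can discharge)
    #   energy capacity >= 12 GWh (how long it can sustain that discharge)
    print(f"Power capacity needed:  {overnight_load_gw:.0f} GW")
    print(f"Energy capacity needed: {energy_needed_gwh:.0f} GWh")
    ```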



  • With nuclear, you’re talking about spending money today in year zero to get a nuclear plant built between years 5 and 10, and operation from years 11 through 85.

    With solar or wind, you’re talking about spending money today to get generation online in year 1, and then another totally separate decision in year 25, then another in year 50, and then another in year 75.

    So the comparison isn’t just 2025 nuclear technology versus 2025 solar technology. It’s also 2025 nuclear versus 2075 solar tech. When comparing that entire 75-year lifespan, you’re competing with technology that hasn’t been invented yet.

    Let’s take Comanche Peak, a nuclear plant in Texas that went online in 1990. At that time, solar panels cost about $10 per watt in 2022 dollars. By 2022, the price was down to $0.26 per watt. But Comanche Peak is going to keep operating, and trying to compete with the latest and greatest, for its entire 70+ year lifespan. If 1990 nuclear plants aren’t competitive with 2024 solar panels, why do we believe that 2030 nuclear plants will be competitive with 2060 solar panels or wind turbines?
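    Using the two price points above (roughly $10/W around 1990 and $0.26/W in 2022, both in 2022 dollars), the implied rate of decline is easy to work out; the extrapolation at the end is only a what-if under an assumed slower rate, not a forecast:

    ```python
    # Implied compound annual decline in solar panel prices from the two
    # price points above. The 2060 figure is a what-if, not a forecast.
    price_1990 = 10.00   # $/W in 2022 dollars, circa 1990
    price_2022 = 0.26    # $/W in 2022 dollars

    years = 2022 - 1990
    annual_decline = 1 - (price_2022 / price_1990) ** (1 / years)
    print(f"Implied decline: {annual_decline:.1%} per year")   # roughly 11% per year

    # What-if: even at only half that rate through 2060
    price_2060 = price_2022 * (1 - annual_decline / 2) ** (2060 - 2022)
    print(f"Hypothetical 2060 price at half the historical rate: ${price_2060:.2f}/W")
    ```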


  • I don’t think that math works out, even when looking over the entire 70+ year life cycle of a nuclear reactor. When it costs $35 billion to build two 1 GW reactors, even if they last 70 years, amortizing the construction cost over every year, or every megawatt-hour generated, is still really expensive, especially once you account for interest.

    And it bakes in that huge cost irreversibly up front, so any future improvements will only make the existing plant less competitive. Wind and solar and geothermal and maybe even fusion will get cheaper over time, but a nuclear plant with most of its costs up front can’t. 70 years is a long time to commit to something.
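    A rough amortization sketch using the numbers above ($35 billion for two reactors of about 1 GW each, 70-year life); the capacity factor and interest rate are assumptions for illustration, and this covers construction cost only, not fuel or operations:

    ```python
    # Rough amortization of nuclear construction cost per MWh generated.
    # Capacity factor and discount rate are illustrative assumptions;
    # fuel, staffing, and maintenance costs are excluded.
    overnight_cost = 35e9        # $35 billion construction cost (from above)
    capacity_mw = 2 * 1_000      # two reactors of roughly 1 GW each
    capacity_factor = 0.90       # assumed
    lifetime_years = 70          # from above
    discount_rate = 0.05         # assumed cost of capital

    annual_mwh = capacity_mw * 8760 * capacity_factor

    # Ignoring interest entirely:
    simple = overnight_cost / (annual_mwh * lifetime_years)

    # With interest, using a standard capital recovery factor:
    crf = discount_rate / (1 - (1 + discount_rate) ** -lifetime_years)
    financed = overnight_cost * crf / annual_mwh

    print(f"Construction cost alone, no interest: ${simple:.0f}/MWh")    # ~$32/MWh
    print(f"Construction cost alone, 5% interest: ${financed:.0f}/MWh")  # ~$115/MWh
    ```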



  • Why has the world gotten both “more intelligent” and yet fundamentally more stupid at the same time? Serious question.

    Because it’s not actually always true that garbage in = garbage out. DeepMind’s AlphaZero trained itself from a very bad chess player to significantly better than any human has ever been, simply by playing chess games against itself and updating its parameters for evaluating which positions were better than others. All the system needed was the rules of chess, a way to define wins, losses, and draws, and a training procedure that optimized for winning over drawing, and for drawing over losing when a win was no longer available.
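    A toy version of that self-play loop, using the much simpler game of Nim as a stand-in (this is not AlphaZero, just the same shape: a rule set, a win/lose signal, and a value table updated from games the system plays against itself):

    ```python
    # Toy self-play learner: Nim with 15 stones, take 1-3 per turn, taking
    # the last stone wins. Not AlphaZero, just the same loop shape: rules,
    # a win/lose signal, and values updated purely from self-play.
    import random
    from collections import defaultdict

    Q = defaultdict(float)   # learned value of (stones_left, move)
    ACTIONS = (1, 2, 3)

    def pick(stones, epsilon=0.1):
        legal = [a for a in ACTIONS if a <= stones]
        if random.random() < epsilon:
            return random.choice(legal)                  # explore
        return max(legal, key=lambda a: Q[(stones, a)])  # exploit current values

    for _ in range(50_000):                      # self-play games
        stones, history = 15, []
        while stones > 0:
            move = pick(stones)
            history.append((stones, move))
            stones -= move
        reward = 1.0                             # the side that took the last stone won
        for state_move in reversed(history):     # walk back, alternating winner/loser
            Q[state_move] += 0.1 * (reward - Q[state_move])
            reward = -reward

    # After training it should prefer leaving a multiple of 4 stones.
    print([max((a for a in ACTIONS if a <= s), key=lambda a: Q[(s, a)]) for s in range(1, 8)])
    ```

    No outside data goes in at all; the only signal is who won. That’s exactly why the approach works for a game with a crisp win condition, and falls apart when “good” has no clean definition.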

    Face swaps, and deepfakes in general, relied on adversarial training as well: one network learned to produce fakes, another learned to detect them, and each improved against the other.

    Some tech guys thought they could bring that adversarial dynamic for improving models to generative AI, where they could train on inputs and improve over those inputs. But the problem is that there isn’t a good definition of “good” or “bad” inputs, and so the feedback loop in this case poisons itself when it starts optimizing on criteria different from what humans would consider good or bad.

    So it’s less like the AI technologies that came before, and more like how Netflix poisoned its own recommendation engine by producing content informed by that very engine. When you passively observe trends and connections, you can model them. But once you start feeding back into the data by producing shows and movies you predict will do well, the feedback loop gets unpredictable, and the model ends up over-fitting to new stuff it merely thinks might be “good.”



  • The problem is that there are too many separate dimensions to define the tiers.

    In terms of data signaling speed and latency, you have the basic generations of USB 1.x, 2.0, 3.x, and 4, with Thunderbolt 3 essentially being the same thing as USB4, and Thunderbolt 4 adding on some more minimum requirements.

    On top of that, you have USB-PD, which is its own standard for power delivery, including how the devices conduct handshakes over a certified cable.

    And then you have the alternate modes: standards not for raw data speed, but for tunneling other kinds of signals through the same cable and connector, outside the USB data protocol itself. Most famously, there’s DisplayPort Alt Mode for driving a DP-compatible monitor over a USB-C connection. But there’s also an analog audio mode, so the cable and port can pass analog audio to or from microphones or speakers.

    Each capability also imposes its own physical requirements on the cable, which limits how long a cable can be and still work properly. That’s why a lot of the cables that support the latest and greatest data and power standards tend to be short. A longer cable might be more convenient, but can come at the cost of dropping certain functions. I personally have a long cable that supports USB-PD but can’t carry Thunderbolt data speeds or certain other signals; I like it because it’s good for plugging in a charger when I’m not close to an outlet, but I also know it’s a poor cable for connecting my external SSD, which would be bottlenecked at USB 2.0 speeds.

    So the tiers themselves aren’t going to be well defined.
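    As a sketch of why that is, here’s one way to model those independent dimensions; the speed figures are the nominal maximums for each generation, and the two example cables are hypothetical:

    ```python
    # Sketch of why one "tier" label can't describe a USB-C cable: the
    # capabilities are independent dimensions. Speeds are nominal maximums;
    # the two example cables are hypothetical.
    from dataclasses import dataclass

    NOMINAL_GBPS = {
        "USB 2.0": 0.48,
        "USB 3.2 Gen 1": 5,
        "USB 3.2 Gen 2": 10,
        "USB4 / Thunderbolt 3": 40,
    }

    @dataclass
    class Cable:
        data: str             # data generation the cable is actually wired for
        max_power_w: int      # USB-PD power it is rated to carry
        alt_modes: tuple      # tunneled modes supported (e.g. DisplayPort)
        length_m: float

    charging = Cable("USB 2.0", 100, (), 3.0)                                # hypothetical long charging cable
    thunderbolt = Cable("USB4 / Thunderbolt 3", 100, ("DisplayPort",), 0.8)  # hypothetical short TB cable

    for c in (charging, thunderbolt):
        print(f"{c.length_m} m: {NOMINAL_GBPS[c.data]} Gbps, {c.max_power_w} W, "
              f"alt modes: {list(c.alt_modes) or 'none'}")
    ```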


  • The only Apple devices that don’t have at least Thunderbolt 3 on every port mark the ports that do support it with the Thunderbolt logo, with one exception: the short-lived 12-inch MacBook (non-Pro, non-Air). Basically, for data transfer:

    • If it’s a 12-inch MacBook, the single USB-C port doesn’t support Thunderbolt, and only supports USB 3.1 Gen 1.
    • In all other devices, if the ports are unmarked, they all support Thunderbolt 3 or higher.
    • If the ports are marked with Thunderbolt symbols, those ports support Thunderbolt but the unmarked ports on the same computer don’t.

    For power delivery, every USB-C port in every Apple laptop supports at least first generation USB-PD.

    For display, every USB-C port in every Apple laptop (and maybe even the desktops) supports DisplayPort alt mode.

    It’s annoying but not actually that hard to remember in the wild.
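    Those data-transfer rules are simple enough to write down as a tiny decision helper (a hypothetical function, just restating the three bullets above):

    ```python
    # Hypothetical helper restating the three data-transfer rules above
    # for Apple laptops with USB-C ports.
    def port_supports_thunderbolt(model: str,
                                  machine_has_marked_ports: bool,
                                  this_port_is_marked: bool) -> bool:
        if model == "MacBook (12-inch)":
            return False                   # single port, USB 3.1 Gen 1 only
        if machine_has_marked_ports:
            return this_port_is_marked     # only the marked ports carry Thunderbolt
        return True                        # no markings: every port is Thunderbolt 3+

    print(port_supports_thunderbolt("MacBook (12-inch)", False, False))   # False
    print(port_supports_thunderbolt("MacBook Pro (2021)", False, False))  # True
    ```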


  • Everything defined in the Thunderbolt 3 spec was incorporated into the USB4 spec, so Thunderbolt 3 and USB4 should be basically identical. In reality, the two standards are certified by different bodies, so a hardware manufacturer can’t market compliance with either one until it gets the corresponding certification. Framework’s laptops dealt with that for a while: their ports met specs essentially identical to USB4, or even Thunderbolt 4, but Framework couldn’t say so until after units had already been shipping.



  • Apple does two things that are very expensive:

    1. They use a huge physical area of silicon for their high-performance chips. The “Pro” line of M chips has a die size of around 280 square mm, the “Max” line is about 500 square mm, and the “Ultra” line is possibly more than 1000 square mm. This is incredibly expensive to manufacture and package (a rough sketch of why follows this list).
    2. They pay top dollar for exclusive rights to TSMC’s new nodes. They lock up the first year or so of TSMC’s manufacturing capacity on a given node, after which there is enough capacity to accommodate designs from other TSMC clients (AMD, NVIDIA, Qualcomm, etc.). That means you can go out and buy an Apple device made on TSMC’s latest node before AMD or Qualcomm have even announced the product lines that will use that node.
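    To give a feel for point 1, here’s a back-of-the-envelope dies-per-wafer and yield sketch using the die sizes above; the wafer size and defect density are assumptions, and the simple Poisson yield model is only a rough approximation, not TSMC’s actual numbers:

    ```python
    # Back-of-the-envelope: why big dies are so expensive. Wafer size and
    # defect density are assumptions; the Poisson yield model is a rough
    # approximation, not TSMC's real data.
    import math

    WAFER_DIAMETER_MM = 300
    DEFECT_DENSITY_PER_MM2 = 0.0005   # assumed ~0.05 defects per cm^2

    def dies_per_wafer(die_area_mm2: float) -> float:
        d = WAFER_DIAMETER_MM
        # Classic approximation that accounts for dies lost at the wafer edge.
        return math.pi * d**2 / (4 * die_area_mm2) - math.pi * d / math.sqrt(2 * die_area_mm2)

    def yield_fraction(die_area_mm2: float) -> float:
        # Poisson model: probability a die contains zero defects.
        return math.exp(-die_area_mm2 * DEFECT_DENSITY_PER_MM2)

    for name, area in [("Pro ~280 mm^2", 280), ("Max ~500 mm^2", 500), ("Ultra ~1000 mm^2", 1000)]:
        good = dies_per_wafer(area) * yield_fraction(area)
        print(f"{name}: ~{good:.0f} good dies per 300 mm wafer")
    ```

    Under these assumptions you get roughly 185 good Pro-sized dies per wafer versus about 30 Ultra-sized ones, before even touching packaging, so per-chip cost climbs steeply with area.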

    Those are business decisions that others simply can’t afford to follow.


  • The biggest problem they are having is platform maturity

    Maybe that’s an explanation for desktop/laptop performance, but I look at the mobile SoC space where Apple holds a commanding lead over ARM chips from Qualcomm, and where Qualcomm has better performance and efficiency than Samsung’s Exynos line, and I’m thinking a huge chunk of the difference between manufacturers can’t simply be explained by ISA or platform maturity. Apple has clearly been prioritizing battery life and efficiency for 10+ generations of Apple Silicon in the mobile market, and has a lead independent of its ISA, even as it trickled over to the laptop and desktop market.