• 0 Posts
  • 12 Comments
Joined 1 year ago
Cake day: June 27th, 2023

  • Seems like there are a number of issues with this.

    1. Not defining “reliability challenge” in a meaningful way. (How many of these are problems that are expensive or time-consuming to repair? How expensive and how time-consuming? Are these problems that prevent the car from driving safely, or are they inconveniences that can be put off?)

    2. Not controlling for manufacturer. (Toyota has long been regarded as a reliable manufacturer, but they make 2 plug-in hybrids and 1 EV, all of which are new this year. Meanwhile, they offer about a dozen different traditional hybrids. I can believe that the Tesla Model 3 is less reliable than the Toyota Camry, but is a full-electric Hyundai Ioniq less reliable than a Hyundai Sonata?)

    3. Including plug-in hybrids and full electric vehicles as one category. (Plug-in hybrids combine the old breakable parts such as transmissions with the new breakable parts such as lithium batteries. This is the trade-off that buyers make to get the efficiency of an electric vehicle at short ranges and the convenience of an ICE at long ranges.)

  • To my knowledge, Reddit is owned by private companies and investors. Blackrock and Vanguard have no ownership stake, or a very small and very indirect ownership stake.

    For what it’s worth, a significant percentage of every (reasonably liquid) public company on Earth is owned by Vanguard and Blackrock, because those companies manage trillions of dollars in assets (many of which are middle-class people’s retirement investments). They aren’t a conspiracy. They’re asset managers, and mostly passive managers at that.

  • There are a few reasons why music models haven’t exploded the way that large language models and generative image models have. Maybe the strength of the copyright holders is part of it, but I think that the technical issues are a bigger obstacle right now.

    • Generative models are extremely data-inefficient. The Internet is loaded with text and images, but there isn’t as much music.

    • Language and vision are the two problems that machine learning researchers have been obsessed with for decades. They built up “good” datasets for these problems and “good” benchmarks for models. They also did a lot of work on figuring out how to encode these types of data to make them easier for machine learning models. (I’m particularly thinking of all of the research done on word embeddings, which are still pivotal to large language models.)

    Even so, there are some fairly impressive generative music models already.
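
    The word-embedding idea mentioned above can be illustrated in a few lines. This is a toy sketch with made-up vectors (real embeddings, such as word2vec or the input layer of a language model, are learned from data): each word maps to a dense vector, and geometric closeness stands in for semantic similarity.

    ```python
    import math

    # Hypothetical 3-dimensional embeddings, hand-picked for illustration only.
    embeddings = {
        "cat":   [0.90, 0.80, 0.10],
        "dog":   [0.85, 0.75, 0.20],
        "piano": [0.10, 0.20, 0.95],
    }

    def cosine(u, v):
        """Cosine similarity: 1.0 means same direction, 0.0 means orthogonal."""
        dot = sum(a * b for a, b in zip(u, v))
        norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
        return dot / norm

    # "cat" sits much closer to "dog" than to "piano" in this space.
    print(cosine(embeddings["cat"], embeddings["dog"]) >
          cosine(embeddings["cat"], embeddings["piano"]))  # True
    ```

    The same trick is what made text such a convenient domain for machine learning: a discrete symbol becomes a point in a continuous space that a model can do arithmetic on. Audio has no equally mature, standard encoding, which is part of why music lagged behind.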