• 1 Post
  • 12 Comments
Joined 7 months ago
Cake day: May 2nd, 2024





  • They let people believe that streaming is cheap, but it is not. A server can send streams to many people at the same time, but not to as many as it seems, and server uptime is a cost, both in energy and in sysadmin time. Maintaining the network is also expensive, especially in the US, where most people live in low-density neighbourhoods.

    To that you have to add the cost of the big data-collection servers that track everything people watch and build profiles of the customers.

    The dirt-cheap subscriptions were meant to attract new customers; the service was heavily subsidized. The companies looked profitable only because other companies bought more ad space than necessary. Over-advertising is the preferred way to hand out stealth subsidies, but it is a cost for the network's other businesses. After a while they have to shift those costs onto the customers.




  • You assume that either the self-driving software is in charge or the button pusher takes the wheel. You did not consider that the button pusher might keep a foot on the brake but, instead of taking the wheel, have to enter some commands.

    Take the case where there is a road block ahead and the button pusher has to evaluate whether it is safe to move forward: he would not take the wheel, he would tell the driving software where to go. In similar cases he would have to decide whether it is safe to pass an obstacle or to stop there. Even with a burglar trying to get on board, he would call the police and then give some commands to the driving software.

    The idea behind the question is that in the future the AI, or whatever you want to call it, might always be in charge of the specialized functions, like calculating the right trajectory and turning the wheel, while the human checks the surrounding environment and evaluates the situation. So the AI is never supposed to be deactivated; in that case the truck would stop until the maintenance team arrives. (A toy sketch of such a command interface follows below.)
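
    A minimal sketch of that division of labour, assuming a hypothetical command interface (all names here, such as `DrivingSoftware` and `SupervisorCommand`, are made up for illustration and do not describe any real system):

    ```python
    # Sketch of the "button pusher" idea: the driving software stays engaged
    # and the human supervisor only issues high-level commands, never steering.
    # All names are hypothetical, purely for illustration.
    from enum import Enum, auto


    class SupervisorCommand(Enum):
        PROCEED = auto()        # road block judged safe to pass
        PASS_OBSTACLE = auto()  # steer around an obstacle; software picks the trajectory
        HOLD = auto()           # stop and wait (e.g. until the police arrive)


    class DrivingSoftware:
        """Always-on controller: it never hands over the wheel, it only
        accepts high-level commands and keeps computing trajectories itself."""

        def __init__(self):
            self.state = "driving"

        def handle_command(self, cmd: SupervisorCommand) -> str:
            if cmd is SupervisorCommand.HOLD:
                self.state = "holding"
                return "Stopping and waiting for further instructions."
            if cmd is SupervisorCommand.PASS_OBSTACLE:
                self.state = "driving"
                return "Planning a trajectory around the obstacle."
            self.state = "driving"
            return "Resuming normal driving."


    # Usage: the supervisor sees a road block on camera and decides it is safe.
    truck = DrivingSoftware()
    print(truck.handle_command(SupervisorCommand.PROCEED))
    ```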


  • A serious self-driving vehicle must be able to see around itself with different sensors. But then it needs a lot of computing power on board to merge the data streams coming from those different sensors. That adds to the computing power required to properly predict the trajectories of the dozens of other objects moving around the vehicle. I don't know about the latest models, but a few years ago the Google cars had the boot occupied by big computers with several CUDA cards.

    That is not something you can put in a commercial car sold to the public. What you get instead is a car that relies on a single camera to look around and has a sensor in the bumper that cuts the engine when activated but does not create an additional stream of data. Maybe there is a second camera looking down at the lines on the road, but its data stream is not merged with the other; it is only used to adjust the driving commands. I don't even know whether the little onboard computer they have is able to compute the trajectories of all the objects around the car. Few sensors and little processing power: that is not enough, it is not a self-driving car. (A toy sketch of the fusion-and-prediction step is at the end of this comment.)

    When Tesla sells a car with driving assistance they tell the customer that it is not a self-driving car, but they fail to explain why, where the difference lies and how big the gap is. That is one of the reasons why we had so many accidents.

    Similar post earlier.

    It starts from the same news but, taking its cue from the book in the link, asks a different question.
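
    A rough, purely illustrative sketch of the fusion-and-prediction step described above, assuming hypothetical camera/lidar detection lists and a naive constant-velocity model (this is not any vendor's actual pipeline):

    ```python
    # Why merging several sensor streams is expensive: every detected object
    # must be matched across sensors and its trajectory extrapolated each cycle.
    # Names and the constant-velocity model are illustrative assumptions only.
    from dataclasses import dataclass


    @dataclass
    class Detection:
        obj_id: int
        x: float   # position, metres
        y: float
        vx: float  # velocity, m/s
        vy: float


    def fuse(camera: list[Detection], lidar: list[Detection]) -> list[Detection]:
        """Naive fusion: average the estimates of objects seen by both sensors."""
        by_id = {d.obj_id: d for d in camera}
        fused = []
        for d in lidar:
            if d.obj_id in by_id:
                c = by_id.pop(d.obj_id)
                fused.append(Detection(d.obj_id, (d.x + c.x) / 2, (d.y + c.y) / 2,
                                       (d.vx + c.vx) / 2, (d.vy + c.vy) / 2))
            else:
                fused.append(d)
        fused.extend(by_id.values())  # objects only the camera saw
        return fused


    def predict(objects: list[Detection], dt: float) -> list[tuple[int, float, float]]:
        """Constant-velocity prediction of where each object will be after dt seconds."""
        return [(o.obj_id, o.x + o.vx * dt, o.y + o.vy * dt) for o in objects]


    camera = [Detection(1, 10.0, 2.0, -1.0, 0.0)]
    lidar = [Detection(1, 10.4, 2.1, -1.2, 0.0), Detection(2, 25.0, -3.0, 0.0, 0.5)]
    print(predict(fuse(camera, lidar), dt=1.0))
    ```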