Yeah right, and they are a big problem. I haven’t encountered a single V1 Supercharger in Europe in 4 years.
I have only ever seen one in the US, and it was surrounded by V2s and V3s.
You will not have that problem with Tesla, though. All chargers are 150 kW+.
So let’s say the code base leaks.
Let’s say our VPN was also compromised.
Then what is the worst that can happen? Some internal dev API with no real data in it can be tested by hackers.
No. For development purposes I want my devs to be able to clone the repo and start.
So the development config files are inside the repositories.
For local development you would definitely keep them in a config file. Nothing wrong with that.
For production they are set during the release process.
Nothing is more expensive than developers having to hunt down all the configs and keys just to start up a project to make a small fix somewhere.
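Roughly what I mean, as a sketch with made-up names (not from a real project): dev defaults are committed so a fresh clone runs immediately, and production values are injected as environment variables during the release.

```ts
// config.ts -- hypothetical sketch, all names are made up.
// Dev defaults live in the repo so "clone and start" just works;
// production values are set as env vars during the release process.
const devDefaults = {
  apiBaseUrl: "http://localhost:4000", // internal dev API, no real data behind it
  apiKey: "dev-key-not-a-secret",
};

export const config = {
  apiBaseUrl: process.env.API_BASE_URL ?? devDefaults.apiBaseUrl,
  apiKey: process.env.API_KEY ?? devDefaults.apiKey,
};
```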
Tesla Superchargers are €0.36 per kWh.
Just to add: they should not be chained, but they should run in parallel.
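I’m inferring the context here, but assuming this is about independent async calls, a quick sketch of the difference (fetchUser/fetchOrders are made-up stand-ins):

```ts
// Hypothetical example: two independent async calls.
const fetchUser = async () => ({ id: 1, name: "Alice" });
const fetchOrders = async () => [{ orderId: 42 }];

async function chained() {
  // Chained: the second call only starts after the first one finishes,
  // so the total time is the sum of both.
  const user = await fetchUser();
  const orders = await fetchOrders();
  return { user, orders };
}

async function parallel() {
  // Parallel: both calls start at once; total time is roughly the slowest call.
  const [user, orders] = await Promise.all([fetchUser(), fetchOrders()]);
  return { user, orders };
}
```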
The car indeed has mobile data. A Tesla has a permanent 4G connection.
Huh?! If I look at the source of the article at Mozilla, Tesla is actually ranked as almost the least creepy.
So I do not understand where this is coming from. Also, the article’s picture showing only Teslas is highly suggestive.
https://foundation.mozilla.org/en/privacynotincluded/categories/cars/
You can configure Next.js to do client-side rendering only, so that it runs like before!
Another thing: Next.js is not only SSR, it’s hybrid. The advantage here is that it decreases the visible first load time.
On first load, pre-rendered HTML and styling are sent to the browser, so the page is already fully visible. After that, all scripts and secondary CSS are loaded, and only after that is the binding (hydration) done.
Whereas with pure CSR, all the JavaScript needs to be loaded and executed before anything becomes visible to the user.
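For the client-only option, a sketch of what that can look like with next/dynamic (the component path is my own made-up example):

```tsx
// pages/index.tsx -- hypothetical example using the pages router.
// `ssr: false` makes Next.js skip server rendering for this component,
// so that part of the page behaves like a classic CSR app.
import dynamic from "next/dynamic";

const ClientOnlyApp = dynamic(() => import("../components/App"), {
  ssr: false, // render only in the browser
});

export default function Page() {
  return <ClientOnlyApp />;
}
```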
Hmm, you’re right about Autopilot mainly being used on highways, and those roads are a lot safer. I’ll edit my main comment.
Although it’s far from perfect, Autopilot gets into far fewer accidents per mile than drivers without it.
They have some statistics here: https://www.tesla.com/VehicleSafetyReport
EDIT: As pointed out by commenters in this thread, Autopilot is mainly used on highways, whereas the crash average is across all roads. Also, Tesla only counts a crash if the airbag was deployed, while the numbers they compare against count every crash, including ones where no airbag deployed.
Just wow.
I bet you do not live in the Netherlands. We have a standardized process for contesting a fine.
If the picture doesn’t prove with certainty that you were holding a phone, file an objection at the address in the letter, or just don’t pay the €359 fine and argue it before a judge.
The fine comes with a letter, a picture and payment information. If the person really wasn’t using their phone, they can file an objection and the fine will be dismissed. Seems pretty simple to me.
However, I have not heard any complaints about it in the news, and an embarrassing number of fines have been issued for this offense.
You’re totally right.
There is a manual door handle, which is not supposed to be used.
Most guests in my car naturally tend to go for the manual handle instead of the button, when not instructed.
So the people who claim to have been locked in are either looking for money or are total dumbfucks.
You’re right about that. The software is quite epic, compared to other EV manufacturers, like BMW.
The route planning for 1000+ km road trips is almost perfect.
The system works by having AI flag phone usage while driving.
A human will then verify the photo.
The AI is used to respect people’s privacy: humans only see the photos that the AI has flagged.
The combination of AI detection plus human review leads to a 5% false negative rate, and most probably a 0% false positive rate.
This means the AI missed at most 5% of actual offenses, and probably fewer, because some of those misses will be photos the AI did flag but the human reviewer dismissed for not being 100% sure there was an offense. (For example: out of 100 real offenses, 95 end up fined; of the 5 that slip through, some were likely caught by the AI but thrown out by a cautious reviewer.)
Just to clarify the result: the article states that AI plus human review leads to that 95%.
It could also be that the human is dismissing actual positives, found by the AI, as false positives.
I suspect they sent a controlled set of cars through, testing all kinds of scenarios.
The other option would be to install it for a day and then have humans review everything it captured.
Basically the whole movie Sausage Party. Great movie that is also fun for adults