The false promises of Tesla’s Full Self-Driving - eviltoast
  • Eager Eagle@lemmy.world

    I don’t know why people are so quick to defend the need for LIDAR when it’s clear the challenges in self driving are not with data acquisition.

    Sure, there are a few corner cases where it would perform better than visual cameras, but a new array of sensors won’t solve self driving. Similarly, the lack of LIDAR does not preclude self driving; otherwise we wouldn’t be able to drive either.

    • ShadowRam@kbin.social

      challenges in self driving are not with data acquisition.

      What?!?! Of course it is.

      We can already run all this shit through a simulator and it works great, but that’s because the computer knows the exact position, orientation, velocity of every object in a scene.

      In the real world, the underlying problem is that the computer doesn’t know what’s around it, or what the things around it are doing or are going to do.

      It’s 100% a data acquisition problem.

      Source? I do autonomous vehicle control for a living, in environments much more complicated than a paved road with an accepted set of rules.

      • Eager Eagle@lemmy.world

        You’re confusing data acquisition with interpretation. A LIDAR won’t label the data for your AD system and won’t add much to an existing array of visible spectrum cameras.

        You say the underlying problem is that the computer doesn’t know what’s around it. But its surroundings are reliably captured by functional sensors. Therefore it’s not a matter of acquisition, but processing of the data.

        • ShadowRam@kbin.social

          won’t add much to an existing array of visible spectrum cameras.

          You do realize LIDAR is just a camera that also gives an accurate distance per pixel, right?

          It absolutely adds everything.

          But its surroundings are reliably captured by functional sensors

          No it’s not. That’s the point. LIDAR is the functional sensor required.

          You cannot rely on stereoscopic cameras.
          The distance resolution is not there.
          It’s not there for humans.
          It’s not there for the simple reason of physics.

          Unless you spread those cameras out to a width that’s impractical, and even then it STILL wouldn’t be as accurate as LIDAR.
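          The baseline argument above can be sketched with the standard stereo depth-error relation (illustrative numbers, not from the thread): depth uncertainty grows roughly quadratically with range, ΔZ ≈ Z²·Δd / (f·B), so a car-width baseline degrades quickly at highway distances while a typical LIDAR’s error stays roughly constant.

```python
# Sketch of the stereo depth-resolution argument. The rig parameters
# (30 cm baseline, ~1000 px focal length, 0.25 px disparity error) and
# the ~3 cm LIDAR figure are assumed illustrative values, not measurements.

def stereo_depth_error(z_m, baseline_m, focal_px, disparity_err_px=0.25):
    """Approximate depth error of a stereo pair at range z_m (meters):
    dZ ~= Z^2 * d_disparity / (f * B), i.e. quadratic growth with range."""
    return (z_m ** 2) * disparity_err_px / (focal_px * baseline_m)

for z in (10, 30, 60, 100):
    err = stereo_depth_error(z, baseline_m=0.3, focal_px=1000.0)
    print(f"{z:>3} m: stereo ~±{err:.2f} m vs LIDAR ~±0.03 m (typical)")
```

          At 100 m the assumed stereo rig is off by meters while LIDAR stays at centimeters; widening the baseline helps only linearly, which is the “impractical width” point above.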

          You are more than welcome to try it yourself.
          You can be as stupid as Elon and dump money and reputation into thinking that it’s easier or cheaper without LIDAR.

          It doesn’t work, and it’ll never work as well as a LIDAR system.
          Stereoscopic cameras will always be more expensive than LIDAR from a computational standpoint.

          AI will do a hell of a lot better recognizing things via a LIDAR camera than a stereoscopic camera.

          • Eager Eagle@lemmy.world

            This assumes depth information is required for self driving; I think this is where we disagree. Tesla is able to reconstruct its surroundings from visual data alone. In biology, most animals don’t have explicit depth information and are still able to navigate their environments. Requiring LIDAR is a crutch.

            • Geek_King@lemmy.world

              I disagree with you; I don’t think visual cameras alone are up to the task. There was an instance of a Tesla in Autopilot mode driving at night with a drunk driver. This took place on a highway in Texas; the car’s camera footage was released, and it showed the Autopilot failing to identify the police car in its lane, red/blue lights flashing, as a stationary obstacle. Instead, it didn’t realize there was a car in the way until around 1 second before the 55 mph impact, and it turned off Autopilot that 1 second before.

              Having multiple layers of sensors, some being good at actually sensing a stationary obstacle, plus accurate range finding, plus visual analysis to pick out people and animals: that’s the way to go.

              Visual-spectrum-only cameras were also recently reported to have a harder time recognizing people of color and children.

              • Eager Eagle@lemmy.world

                the car’s camera footage was released and it showed the autopilot not identify the police car in the lane with its red/blue lights flashing

                If the obstacle was visible in the footage, the incident could have been avoided with visible spectrum cameras alone. Once again, a problem with the data processing, not acquisition.

                • Geek_King@lemmy.world

                  If we’re talking about the safety of the driver and people around them, why not both types of sensors? LIDAR has things it excels at, and visual spectrum cameras have things they do well too. That way the data processing side has more things to rely on, instead of all the eggs in one basket.

                  • Eager Eagle@lemmy.world

                    why not both types of sensors

                    Cost seems to be a pretty good reason. Admittedly, until I looked it up 5 minutes ago I thought it was just 100-200% more expensive than cameras, but it seems to be much more than that.

                    On top of that, there are the problems of weather and high energy usage. This is more of a problem than just “not working in the rain”: if the autonomous driving system is designed to rely on data from a sensor that stops working when it rains, that can be worse than not having the sensor in the first place. This is what I mean when I say LIDAR is a crutch.

    • Dr. Dabbles@lemmy.world

      Yes, self driving is not computationally solved at all. But the reason people defend LIDAR is that visible-light cameras are very bad at depth estimation. Even with parallax, a lot of software has a very hard time accurately calculating distance and motion.