• hendrik@palaver.p3x.de · 4 months ago

    I don’t think so. I’ve had success letting it write boilerplate code, and simple stuff I could have copied from Stack Overflow or a beginner’s programming book. With every task from my real life, it failed miserably. I’m not sure if I did anything wrong, and it’s been half a year since I last tried. Maybe things have changed substantially in the last few months, but I don’t think so.

    The last thing I tried was some hobby microcontroller code for robotics calculations, and ChatGPT didn’t really get what it was supposed to do. Additionally, instead of doing the maths, it would just invent some library functions, call them with some input values, and imagine the maths to be miraculously done in the background by that nonexistent library.
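    For illustration (a made-up example, not my actual project): the kind of calculation I mean is a few lines of plain trigonometry, like forward kinematics for a 2-link planar arm — no special library required at all:

```python
import math

# Hypothetical illustration: forward kinematics of a 2-link planar arm.
# The maths is plain trigonometry -- no external library needed.
def forward_kinematics(l1: float, l2: float, theta1: float, theta2: float):
    """Return the (x, y) position of the end effector for link
    lengths l1, l2 and joint angles theta1, theta2 (in radians)."""
    x = l1 * math.cos(theta1) + l2 * math.cos(theta1 + theta2)
    y = l1 * math.sin(theta1) + l2 * math.sin(theta1 + theta2)
    return x, y
```

    Instead of writing something like that, ChatGPT would call an imagined helper from a library that doesn’t exist.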

    • flamingo_pinyata@sopuli.xyz · 4 months ago

      Yes, actually, I can imagine it getting microcontroller code wrong. My niche is general backend services. I’ve been using GitHub Copilot a lot, and it has served me well for generating unit tests. Write a test description and it pops out the code with ~80% accuracy.
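      The workflow looks roughly like this (all names invented for illustration, not from a real codebase): write the test description as a comment, and Copilot typically fills in a body along these lines:

```python
import unittest

# Hypothetical function under test -- names are invented for illustration.
def apply_discount(price: float, percent: float) -> float:
    """Return price reduced by percent (0-100)."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

class TestApplyDiscount(unittest.TestCase):
    # description: "applies a 20% discount to a price of 50.00"
    def test_twenty_percent_discount(self):
        self.assertEqual(apply_discount(50.00, 20), 40.00)

    # description: "rejects a discount above 100%"
    def test_rejects_invalid_percent(self):
        with self.assertRaises(ValueError):
            apply_discount(10.00, 150)
```

      You still review and fix the remaining ~20%, but it saves a lot of typing.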

      • hendrik@palaver.p3x.de · 4 months ago

        Sure. There are lots of tedious tasks in a programmer’s life that don’t require a great amount of intelligence. Writing comments, docstrings, unit tests, “glue” and boilerplate code that connects things — and probably several other things that now escape my mind — are good tasks for an AI to assist a proper programmer with, making them more effective and getting things done faster.

        I just wouldn’t call that programming software. “Assisting with some narrow tasks” is more accurate.

        Maybe I should try doing some backend stuff, or give it an API definition and see what it does 😅 Maybe I was a bit blinded by ChatGPT having read Wikipedia and claiming it understands robotics concepts. But it really doesn’t seem to have any proper knowledge. The same probably applies to engineering and other neighboring fields that might need software.

        • flamingo_pinyata@sopuli.xyz · 4 months ago

          It might also have to do with specialized vs. general models. Copilot is good at generating code, but ask it to write prose and it fails completely. In contrast, ChatGPT is awful at code but handles human-readable text decently.