College professors are going back to paper exams and handwritten essays to fight students using ChatGPT

The growing number of students using the AI program ChatGPT as a shortcut in their coursework has led some college professors to reconsider their lesson plans for the upcoming fall semester.

      • Mugmoor@lemmy.dbzer0.com · 76 points · 1 year ago

        It’s always sucked for them, and it always will. That’s why we make accommodations for them, like extra time or a smaller/more private exam hall.

        • Instigate@aussie.zone · 19 points · 1 year ago

          And readers/scribes! I’ve read and scribed for a friend who had dyslexia in one of her exams and it worked really well. She finished the exam with time to spare and got a distinction in the subject!

          • Tavarin@lemmy.ca · 13 points · 1 year ago

            Yep, my girlfriend acted as a scribe for disabled students at a university. She loved it, and the students were able to complete their written work and courses just fine as a result.

      • Naia@lemmy.blahaj.zone · 23 points · 1 year ago

        My handwriting has always been terrible. It was a big issue in school until I was able to turn in printed assignments.

        Like with a lot of school things, they do a shit thing without thinking about negative effects. They always want a simple solution to a complex problem.

      • Tavarin@lemmy.ca · 3 points · 1 year ago

        I did my undergrad from 2008 to 2012, and we had zero online exams. Every exam was in person and handwritten. People with disabilities were accommodated, usually with extra writing time for those who needed it, or a separate room with a scribe for you to narrate to.

        It’s really not a terrible issue, and something universities have been able to deal with for centuries.

        • Matt Shatt@lemmy.world · 3 points · 1 year ago

          Mine was even a bit before that, and I had a similar experience. However, we were able to type up reports and essays, which was great. My handwriting isn’t very good and I’m much faster at typing.

      • Hamartiogonic@sopuli.xyz · 1 point · 1 year ago

        Handwriting an essay means I’m giving 90% of my energy and time to drawing ugly squiggles and 10% to making a sensible argument. If I’m allowed to use a computer, it’s 99% sensible argument and 1% typing. Surely this will not have any impact on the quality of the text the teachers have to read…

      • ratskrad@lemmy.world · 1 point · 1 year ago

        I agree. I think a good compromise, like school-owned, locked-down devices, would still achieve the same thing.

  • HexesofVexes@lemmy.world · 134 points · 1 year ago

    Prof here - take a look at it from our side.

    Our job is to evaluate YOUR ability, and AI is a great way to mask poor ability. We have no way to determine if you did the work, or if an AI did, and if called into a court to certify your expertise we could not do so beyond a reasonable doubt.

    I am not arguing exams are perfect, mind, but I’d rather doubt a few students’ inability (maybe it was just a bad exam for them) than always doubt their ability (is any of this their own work?).

    Case in point, ALL students on my course with low (<60%) attendance this year scored 70s and 80s on the coursework and 10s and 20s in the OPEN BOOK exam. I doubt those 70s and 80s are real reflections of the ability of the students, but they do suggest those students can obfuscate AI work well.

    • maegul@lemmy.ml · 30 points · 1 year ago

      Here’s a somewhat tangential counter, which I think some of the other replies are trying to touch on … why, exactly, continue valuing our ability to do something a computer can so easily do for us (to some extent obviously)?

      In a world where something like AI can come up and change the landscape in a matter of a year or two … how much value is left in the idea of assessing people’s value through exams (and to be clear, I’m saying this as someone who’s done very well in exams in the past)?

      This isn’t to say that knowing things is bad or making sure people meet standards is bad etc. But rather, to question whether exams are fit for purpose as a means of measuring what matters in a world where what’s relevant, valuable or even accurate can change pretty quickly compared to the timelines of one’s life or education. Not long ago we were told that we wouldn’t have calculators with us everywhere, and now we could have calculators embedded in our ears if we wanted to. Analogously, learning and examination are probably premised on the notion that we won’t be able to look things up all the time … when, as current AI, amongst other things, suggests, that won’t be true either.

      An exam assessment structure naturally leans toward memorisation and being drilled in a relatively narrow band of problem solving techniques,1 which are, IME, often crammed prior to the exam and often forgotten quite severely pretty soon afterward. So even presuming that things that students know during the exam are valuable, it is questionable whether the measurement of value provided by the exam is actually valuable. And once the value of that information is brought into question … you have to ask … what are we doing here?

      Which isn’t to say that there’s no value created in doing coursework and cramming for exams. Instead, given that a computer can now so easily augment our ability to do this assessment, you have to ask what education is for and whether it can become something better than what it is given what are supposed to be the generally lofty goals of education.

      In reality, I suspect (as many others do) that the core value of the assessment system is to simply provide a filter. It’s not so much what you’re being assessed on as much as your ability to pass the assessment that matters, in order to filter for a base level of ability for whatever professional activity the degree will lead to. Maybe there are better ways of doing this that aren’t so masked by other somewhat disingenuous goals?

      Beyond that there’s a raft of things the education system could emphasise more than exam-based assessment. Long form problem solving and learning. Understanding things or concepts as deeply as possible and creatively exploring the problem space and its applications. Actually learning the scientific method in practice. Core and deep concepts, both in theory and application, rather than specific facts. Breadth over depth, in general. Actual civics and the knowledge required to be a functioning member of the electorate.

      All of which are hard to assess, of course, which is really the main point of pushing back against your comment … maybe we’re approaching the point where the cost-benefit equation for practicable assessment is being tipped.


      1. In my experience, the best means of preparing for exams, as is universally advised, is to take previous or practice exams … which I think tells you pretty clearly what kind of task an exam actually is … a practiced routine in something that narrowly ranges between regurgitation and pretty short-form, practiced and shallow problem solving.
      • HexesofVexes@lemmy.world · 69 points · 1 year ago

        Ah the calculator fallacy; hello my old friend.

        So, a calculator is a great shortcut, but it’s useless for most mathematics (i.e. proof!). A lot of people assume that having a calculator means they do not need to learn mathematics - a lot of people are dead wrong!

        In terms of exams being about memory, I run mine open book (i.e. students can take pre-prepped notes in). Did you know some students still cram and forget right after the exams? Did you know they forget even faster for coursework?

        Your argument is a good one, but let’s take it further - let’s rebuild education towards an employer centric training system, focusing on the use of digital tools alone. It works well, productivity skyrockets, for a few years, but the humanities die out, pure mathematics (which helped create AI) dies off, so does theoretical physics/chemistry/biology. Suddenly, innovation slows down, and you end up with stagnation.

        Rather than moving us forward, such a system would lock us into place and likely create out of date workers.

        At the end of the day, AI is a great tool, but so is a hammer and (like AI today), it was a good tool for solving many of the problems of its time. However, I wouldn’t want to only learn how to use a hammer, otherwise how would I be replying to you right now?!?

        • maegul@lemmy.ml · 8 points · 1 year ago

          So … I honestly think this is a problematic reply … I think you’re being defensive (and consequently maybe illogical), and, honestly, that would be the red flag I’d look for to indicate that there’s something rotten in academia. Otherwise, there might be a bit of a disconnect here … thoughts:

          • The calculator was in reference to arithmetic and other basic operations and calculations using them … not higher level (or actual) mathematics. I think that was pretty clear and I don’t think there’s any “fallacy” here, like at all.
          • The value of learning (actual) mathematics is pretty obvious I’d say … and was pretty much stated in my post about alternatives to emphasise. On which, getting back to my essential point … how would one best learn and be assessed on their ability to construct proofs in mathematics? Are timed open book exams (and studying in preparation for them) really the best we’ve got!?
          • Still forgetting with open book exams … seems like an obvious outcome as the in-exam materials de-emphasise memory … they probably never knew the things you claim they forget in the first place. Why, because the exam only requires the students to be able to regurgitate in the exam, which is the essential problem, and for which in-exam materials are a perfect assistant. Really not sure what the relevance of this point is.
          • Forgetting after coursework … how do you know this (genuinely curious)? Even so, course work isn’t the great opposite to exams. Under the time crunch of university, they are also often crammed, just not in an examination hall. The alternative forms of education/assessment I’m talking about are much more long-form and exploration and depth focused. The most I’ve ever remembered from a single semester subject came from when I was allowed to pursue a single project for the whole subject. Also, I didn’t mention ordinary general coursework in my post, as, again, it’s pretty much the same paradigm of education as exams, just done at home for the most part.
          • Rebuilding education toward employer centric training system … I … ummm … never suggested this … I suggested the opposite … only things that were far more “academic” than this and were never geared toward “productivity”. This is a pretty bad straw man argument for a professor to be making, especially given that it seems constructed to conclude that the academy and higher learning are essential for the future success of the economy (which I don’t disagree with or even question in my post).
          • You speak about AI a lot. Maybe your whole reply was solely to the whole calculator point I made. This, I think, misses the broader point, which most of my post was dedicated to. That is, this isn’t about us now needing to use AI in education (I personally don’t buy that at all for probably much of the same reason you’d push back on it). Instead, it’s about what it means about our education system that AI can kinda do the thing we’re using to assess ourselves … on which I say, it tells us that the value of assessment system we take pretty seriously ought to be questioned, especially, as I think we both agree on, given the many incredibly valuable things education has to offer the individual and society at large. In my case, I go further and say that the assessment system is and has already detracted from these potential offerings, and that it does not bode well for modern western society that it seems to be leaning into the assessment system rather than expanding its scope.
            • maegul@lemmy.ml · 2 points · 1 year ago

              Ha … well if I had answers I probably wouldn’t be here! But seriously, I do think this is a tough topic with lots of tangled threads linked to how our society functions. I’m not sure there are any easy “fixes”, I don’t think anyone who claims there are can really be trusted, and it may very well turn out that I’m completely wrong and there is no “better way”, as something flawed and problematic may just turn out to be what humanity needs.

              A pretty minor example based on the whole thing of returning to paper exams. What happens when you start forcing students to be judged on their ability to do something, alone, where they know very well that they can do better with an AI assistant? Like at a psychological and cultural level? I don’t know, I’m not sure my generation (Xennial) or earlier ever had that. Even with calculators and arithmetic, it was always about laziness, dealing with big numbers that were impossible for normal humans, or ensuring accuracy. It may not be the case that AI is at that level yet for many exams and students (I really don’t know), but it might be, or might be soon. However valuable it is to force students to learn to do the task without the AI, there’s gotta be some broad cultural effect in just ignoring the super useful machine.

              Otherwise, my general ideas would be to emphasise longer form work (which AI is not terribly useful for). Work that requires creativity, thinking, planning, coherent understanding, human-to-human communication and collaboration. So research projects, actual practical work, debates, teaching as a form of assessment etc. In many ways, the idea of “having learned something” becomes just a baseline expectation. Exams, for instance, may still hold lots of value, but not as forms of objective assessment, but as a way of calibrating where you’re up to on the basic requirements to start the real “assessment” and what you still need to work on.

              Also … OK Mr Socrates … is maybe not the most polite way of engaging here … comes off as somewhat aggressive TBH.

        • Spzi@lemm.ee · 1 point · 1 year ago

          let’s rebuild education towards an employer centric training system, focusing on the use of digital tools alone. It works well, productivity skyrockets, for a few years, but the humanities die out, pure mathematics (which helped create AI) dies off, so does theoretical physics/chemistry/biology. Suddenly, innovation slows down, and you end up with stagnation.

          Rather than moving us forward, such a system would lock us into place and likely create out of date workers.

          I found this too generalizing. Yes, most people only ever need and use productivity skills in their work life. They do no fundamental research. Whether their education was this or that way has no effect on the advancement of science in general, because these people don’t do science in their career.

          Different people with different goals will do science, and for them an appropriate education makes sense. It also makes sense to have everything in between.

          I don’t see how it helps the humanities and other sciences to teach skills which are never used. Or how it helps to teach a practice which no one applies in practice. How is it a threat to education when someone uses a new tool intelligently, so they can pass academic education exams? How does that make them any less valuable for working in that field? Assuming the exam reflects what working in that field actually requires.

          I think we can also spin an argument in the opposite direction: More automation in education frees the students to explore side ideas, to actually study the field.

          • HexesofVexes@lemmy.world · 11 points · 1 year ago

            “I don’t see how it helps the humanities and other sciences to teach skills which are never used.” - I can offer an unusual counter here: you’re assuming the knowledge will never be used, or that we should avoid teaching things that are unlikely to be used. Were this the case, the field of graph theory would have ceased to exist long before it became useful in mapping - indeed Boole’s algebra would never have led to the foundations of computer science and the machines we are using today.

            “How is it a threat to education when someone uses a new tool intelligently, so they can pass academic education exams?” - Allow me to offer you the choice of two doctors, one of whom passed using AI, and the other passed a more traditional assessment. Which doctor would you choose and why? Surely the latter, since they would have also passed with AI, but the one without AI might not have passed the more traditional route due to a lack of knowledge. It isn’t a threat to education, it’s adding further uncertainty as to the outcome of such an education (both doctors might have the same skill levels, but there is more room for doubt in the first case).

            “Whether their education was this or that way has no effect on the advancement of science in general, because these people don’t do science in their career.” - I strongly disagree! In an environment where knowledge for the sake of knowledge is not prized, a lie is easier to plant and nurture (take for example the antivax movement). Such people can be an active hindrance to the progress of knowledge - their misconceptions creating false leads and fostering an environment that distrusts such sciences (we’re predisposed to distrust what we do not understand).

            • Spzi@lemm.ee · 2 points · 1 year ago

              you’re assuming the knowledge will never be used, or that we should avoid teaching things that are unlikely to be used.

              Not exactly. What I meant to say is: Some students will never use some of the knowledge they were taught. In the age of information explosion, there is practically unlimited knowledge ‘available’. What part of this knowledge should be taught to students? For each bit of knowledge, we can make your hypothetical argument: It might become useful in the future; an entire important branch of science might be built on top of it.

              So this on its own is not an argument. We need to argue why this particular skill or knowledge deserves the attention and focus to be studied. There is not enough time to teach everything. Which in turn can be used as an argument for more computer-assisted learning and teaching. For example, I found ChatGPT useful to explore topics. I would not have used it to cheat in exams, but probably to prepare for them.

              the choice of two doctors, one of whom passed using AI, and the other passed a more traditional assessment. Which doctor would you choose and why? Surely the latter, since they would have also passed with AI, but the one without AI might not have passed the more traditional route due to a lack of knowledge.

              Good point, but it depends on context. You assume the traditional doc would have passed with AI, but that is questionable. These are complex tools with often counterintuitive behaviour. They need to be studied and approached critically to be used well. For example, the traditional doc might not have spotted the AI hallucinating, because she wasn’t aware of that possibility.

              Further, it depends on their work environment. Do they treat patients with, or without AI? If the doc is integrated in a team of both human and artificial colleagues, I certainly would prefer the doc who practiced these working conditions, who proved in exams they can deliver the expected results this way.

              In an environment where knowledge for the sake of knowledge is not prized

              I feel we left these lands in Europe when diplomas were abandoned for the bachelor/master system, 20 years ago. Academic education is streamlined, tailored to the needs of the industry. You can take a scientific route, but most students don’t. The academia which you describe as if it were threatened by something new might exist, but it lives alongside a more functional academia where people learn things to apply them in our current reality.

              It’s quite a hot take to pin things like the antivax movement on academic education. For example, I question whether the people proposing and falling for these ‘ideas’ are academics in the first place.

              Personally, I like learning knowledge for the sake of knowledge. But I need time and freedom to do so. When I was studying computer science with an overloaded schedule, my interest in toying with ideas and diving into backgrounds was extremely limited. I also was expected to finish in an unreasonably short amount of time. If I could have sped up some of the more tedious parts of the studies with the help of AI, this could have freed up resources and interest for the sake of knowledge.

              • pinkdrunkenelephants@sopuli.xyz · 5 points · 1 year ago

                You use literally everything you learn; it shapes your worldview and influences everything you do, especially how you vote. Don’t tell us that useless knowledge exists. It all has inherent worth.

                • Spzi@lemm.ee · 1 point · 1 year ago

                  Yes, within limits. Due to the information explosion, it became impossible to learn “everything”. We need to make choices, prioritize.

                  How does your voting behaviour suffer because you lack understanding about how exactly potentiometers work, or how to express historic events in modern dance?

                  Both have inherent worth, but not the same for each person and context. We luckily live in a society of labor division. Not everyone has to know or like everything. While I absolutely admire science, not everyone has to be a scientist.

                  Because there is more knowledge available than we can ever teach a single person, it is entirely possible to spend a lifetime learning things with no use informing your ballot decision. I would much rather have students optimize some parts of their education with AI, to free up capacity for other important subjects which may seem less related to their discipline. For example, many of my fellow computer science students were completely unaware how it could be ethically questionable to develop pathfinding algorithms for military helicopters.

      • CapeWearingAeroplane@sopuli.xyz · 16 points · 1 year ago

        I think a central point you’re overlooking is that we have to be able to assess people along the way. Once you get to a certain point in your education you should be able to solve problems that an AI can’t. However, before you get there, we need some way to assess you in solving problems that an AI currently can. That doesn’t mean that what you are assessed on is obsolete. We are testing to see if you have acquired the prerequisites for learning to do the things an AI can’t do.

        • maegul@lemmy.ml · 4 points · 1 year ago

          1. AI isn’t as important to this conversation as I seem to have implied. The issue is us, ie humans, and what value we can and should seek from our education. What AI can or can’t do, IMO, only affects vocational aspects in terms of what sorts of things people are going to do “on the job”, and, the broad point I was making in the previous post, which is that AI being able to do well at something we use for assessment is an opportunity or prompt to reassess the value of that form of assessment.
          2. Whether AI can do something or not, I call into question the value of exams as a form of assessment, not assessment itself. There are plenty of other things that could be used for assessment or grading someone’s understanding and achievement.
          3. The real bottom line on this is cost and that we’re a metric driven society. Exams are cheap to run and provide clean numbers. Any more substantial form of assessment, however much they better target more valuable skills or understanding, would be harder to run. But again, I call into question how valuable all of what we’re doing actually is compared to what we could be doing, however more expensive and whether we should really try to focus more on what we humans are good at (and even enjoy).
          • pinkdrunkenelephants@sopuli.xyz · 7 points · 1 year ago

            AI can’t do jack shit with any meaningful accuracy anyway so it’s stupid to compare human education to AI blatantly making shit up like it always does

      • ZzyzxRoad@lemm.ee · 10 points · 1 year ago

        Here’s a somewhat tangential counter, which I think some of the other replies are trying to touch on … why, exactly, continue valuing our ability to do something a computer can so easily do for us (to some extent obviously)?

        My theory prof said there would be paper exams next year. Because it’s theory. You need to be able to read an academic paper and know what theoretical basis the authors had for their hypothesis. I’m in liberal arts/humanities. Yes we still exist, and we are the ones that AI can’t replace. If the whole idea is that it pulls from information that’s already available, and a researcher’s job is to develop new theories and ideas and do survey or interview research, then we need humans for that. If I’m trying to become a professor/researcher, using AI to write my theory papers is not doing me or my future students any favors. Statistical research, on the other hand: they already use programs for that and use existing data, so idk. But even then, any AI statistical analysis should be testing a new hypothesis that humans came up with, or a new angle on an existing one.

        So idk how this would affect engineering or tech majors. But for students trying to be psychologists, anthropologists, social workers, professors, then using it for written exams just isn’t going to do them any favors.

        • maegul@lemmy.ml · 4 points · 1 year ago

          I also used to be a humanities person. The exam based assessments were IMO the worst. All the subjects assessed without any exams were by far the best. This was before AI BTW.

          If you’re studying theoretical humanities type stuff, why can’t your subjects be assessed without exams? That is, by longer form research projects or essays?

      • dragonflyteaparty@lemmy.world · 7 points · 1 year ago

        As they are talking about writing essays, I would argue the importance of being able to do it lies in being able to analyze a book/article/whatever, make an argument, and defend it. Being able to read and think critically about the subject would also be very important.

        Sure, rote memorization isn’t great, but neither is having to look something up every single time you need it because you forgot. There are also many industries in which people do need a large information base available for quick recall. Learning to do that much later in life sounds very difficult. I’m not saying people should memorize everything, but not having very many facts about the world around you at basic recall doesn’t sound good either.

        • maegul@lemmy.ml · 1 point · 1 year ago

          Learning to do that much later in life sounds very difficult

          That’s an interesting point I probably take for granted.

          Nonetheless, exercising memory is probably something that could be done in a more direct fashion, and therefore probably better, without that concern affecting the way we approach all other forms of education.

      • tony@lemmy.hoyle.me.uk · 6 points · 1 year ago

        It’s an interesting point… I do agree memorisation is (and always has been) used as more of a substitute for actual skills. It’s always been a bugbear of mine that people aren’t taught to problem solve, just regurgitate facts, when facts are literally at our fingertips 24/7.

            • SpiderShoeCult@sopuli.xyz · 3 points · 1 year ago

              While I do agree with your initial point (that memorization is not really the way to go with education; I’ve hated it all my life because it was never a true filter - a parrot could pass university-level tests if trained well enough), I will answer your first point there and say that yes, it is important to know where Yugoslavia was, because politics has always been first and foremost influenced by geography, and not just recently.

              Without discussing the event mentioned itself, some points to consider:

              1. The cultural distribution of people - influenced by geography - people on the same side of the mountain or river are more likely to share the same culture, for example. Also: were the lands easily accessible to conquering armies and full of resources? Have some genocide and replacement with colonizers from the empire - and the pockets of ‘natives’ left start harboring animosity towards the new people.

              2. Spheres of influence throughout history - arguably the most important factor - that area of Europe has usually been hammered by its more powerful neighbours, with nations not possessing adequate diplomacy or tactics being absorbed into or heavily influenced by whatever empire was strongest at the time - Ottoman Empire, USSR, Roman Empire if we want to go that far into history. So I would say hearing ‘Yugoslavia was in South East Europe’ would immediately prompt an almost instinctual question of ‘Oh, what terrible things happened there throughout history, then?’ for one familiar with that area, thereby raising this little tidbit to one of the top facts.

              We could then raise the question of what would have happened to the people had they been somewhere else? History is written by the victors, and the nasty bits (like sabotage and propaganda to prevent a certain geographically important nation from becoming too powerful) are left out.

              My geopolitics game isn’t that strong but I’m going to go out on a limb here and say that if the Swiss weren’t in the place they are, they would probably not be the way they are (no negative nuance intended). Living in a place that’s hard to invade tends to shape people differently than constantly looking over your shoulder.

              And reading your second point, I’m understanding about what I wrote in this wall of text. Odd.

              • maegul@lemmy.ml · 2 points · 1 year ago

                And reading your second point, I’m understanding about what I wrote in this wall of text. Odd.

                Yea … we’re on the same page here (I think). All the things you’re talking about are the important stuff, IMO. “Yugoslavia is in south eastern Europe” doesn’t mean much, even if you can guess something about the relatively obvious implications of that geography, as you say. But those implications come from somewhere, some understanding of some other episode of history. Or it could come from learning about Yugoslavia’s and the Balkans’ history. For instance, you might note from the location that it’s relatively close to Turkey, but that wouldn’t lead you to naturally expect a sizeable Islamic population in the region (well I didn’t at first), unless you really knew the Ottoman history too. So there’s a whole story to learn there of the particular cultural make up of the place and where it comes from and how that leads to cultural tensions come the Yugoslavian wars. In learning about that, you can learn about how far away the Ottoman empire was and where its borders got to over time, where the USSR was and the general ambit of Slavic culture etc. Once you’ve got a story to tell, those things become naturally important and memorable.

                And now I’ve added my own wall of text … sorry. So … yes! I agree! Both of our walls of texts are (loosely) about the important stuff, with facts sure, but motivated by and situated in history (though there’s obviously a fuzzy line there too!)

        • maegul@lemmy.ml · 3 points · 1 year ago

          Yea, it isn’t even a new problem. The exam was questionable before AI.

      • Spike@feddit.de · 3 points · edited · 1 year ago

        In my experience, the best means of preparing for exams, as is universally advised, is to take previous or practice exams … which I think tells you pretty clearly what kind of task an exam actually is … a practiced routine in something that narrowly ranges between regurgitation and pretty short-form, practiced and shallow problem solving.

        You are getting some flak, but imho you are right. The only thing an exam really tests is how well you do in exams. Of course, educators don’t want to hear that. But if you take a deep dive into the (scientific) literature on the topic, the question “What are we actually measuring here?” is rightfully raised.

        • maegul@lemmy.ml · 1 point · edited · 1 year ago

          Getting flak on social media, through downvotes, can often (though not always!) be a good thing … means you’re touching a nerve or something.

          On this point, I don’t think I’ve got any particularly valuable or novel insights, or even any good solutions … I’m mostly looking for a decent conversation around this issue. Unfortunately, I suspect, when you get everyone to work hard on something and give them prestigious certifications for succeeding at that something, and then do this for generations, it can be pretty hard to convince people to not assign some of their self-worth to the quality/value/meaning of that something and to then dismiss it as less valuable than previously thought. Possibly a factor in this conversation, which I say with empathy.


          Any links to some literature?

          • Spike@feddit.de · 2 points · 1 year ago

            I’ve only used papers in German so far, sadly.

            Here is something I found interesting in english:

            “Testing the test: Are exams measuring understanding?” by Brian K. Sato, Cynthia F. C. Hill, and S. Lo, Biochemistry and Molecular Biology Education.

            in general: elicit.org

            really good site.

            • maegul@lemmy.ml · 1 point · 1 year ago

              Hadn’t heard of that elicit site … thanks! How have you found it? It makes sense that it exists already, but I hadn’t really thought about it (haven’t looked up papers recently but may soon).

              Also thanks for the paper!!

              • Spike@feddit.de · 2 points · 1 year ago

                I found it relatively early after it was created, and use it to get a quick overview of papers when writing my own. It is sooo good for that.

      • assassin_aragorn@lemmy.world · 1 point · 1 year ago

        In my experience, they love to give exams where it doesn’t matter what notes you bring, you’re on the same level whether you write down only the essential equations, or you copy down the whole textbook.

      • HexesofVexes@lemmy.world · 9 points · 1 year ago

        So, we’re working on a study of online vs. in-person exams.

        We’ve noticed that strict time limits alone tend to shift grades. A locked-down browser sounds great, but anyone can search using their phone, so proctoring is a must (but also time consuming to check) if you want to get the intended effect.

        As for online grading, it’s a mixed bag. With a very strict rubric, Gradescope can save a lot of time, but otherwise it takes a lot longer. MCQs and single-number answers can be auto-marked, but they’re awful at assessing ability and should be avoided. Overall, grading online costs more than it saves, and tends to give much more rigid feedback to students.
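
        As an illustration of why that kind of marking is cheap to automate (and why it captures so little about ability), here is a minimal sketch in Python; the function names and tolerance are assumptions for illustration only, not any real Gradescope API:

        ```python
        # Hypothetical sketch of auto-marking; not Gradescope's actual API.
        def mark_numeric(submitted: str, correct: float, tol: float = 1e-2) -> bool:
            """Accept a single-number answer if it parses and falls within a tolerance."""
            try:
                return abs(float(submitted) - correct) <= tol
            except ValueError:
                return False

        def mark_mcq(submitted: str, correct: str) -> bool:
            """Exact-match marking for a multiple-choice answer such as 'B'."""
            return submitted.strip().upper() == correct.strip().upper()

        print(mark_numeric("3.14", 3.14159))  # True: within the default tolerance
        print(mark_mcq(" b ", "B"))           # True: case and whitespace ignored
        ```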

          • HexesofVexes@lemmy.world · 1 point · 1 year ago

            You could indeed go for such a setup; however, in a room with 50+ students it becomes very hard to angle a camera with a clear view of all of them, their computer screens, and under their desks. It’s easier just to walk around the room to invigilate. However, I might read up on this as it might be an option for students with exam anxiety (I realise we look scary walking around the exam room!).

    • Phoebe@feddit.de · 9 points · 1 year ago

      Sorry, but it was never about OUR ability in the first place.

      In my country, exams are old, outdated and often way too hard. All classes are outdated and way too hard. It often feels like we are stuck in the middle of the 20th century.

      You have no chance when you have a disability. Or when you have kids or parents to take care of. Or hell: you have to work, because you can’t afford university otherwise.

      So I can totally understand why students feel the need to use AI to survive that torture. I don’t feel sorry for an outdated university system.

      If it is about OUR ability, then create a system that is built for students and their needs.

      • pinkdrunkenelephants@sopuli.xyz · 5 points · 1 year ago

        If you can use AI to do the homework for you, we can use AI to do the job instead of you

        For the love of god, think beyond yourself for one damn minute

        • jarfil@lemmy.world · 3 points · 1 year ago

          Precisely. Homework and tests that can be solved by an AI are useless; nobody will hire you to do any of it when they can just plug in an AI.

    • MNByChoice@midwest.social · 7 points · 1 year ago

      Case in point, ALL students on my course with low (<60%) attendance this year scored 70s and 80s on the coursework and 10s and 20s in the OPEN BOOK exam. I doubt those 70s and 80s are real reflections of the ability of the students

      I get that this is a quick post on social media and only an anecdote, but that is interesting. What do you think the connection is? AI, anxiety, or something else?

      • Kage520@lemmy.world · 11 points · 1 year ago

        That sounds like AI. If you do your homework then even sitting in a regular exam you should score better than 20%. This exam being open book, it sounds like they were unfamiliar with the textbook and could not find answers fast enough.

      • HexesofVexes@lemmy.world · 11 points · 1 year ago

        It’s a tough one because I cannot say with 100% certainty that AI is the issue. Anxiety is definitely a possibility in some cases, but not all; perhaps thinking time might be a factor, or even just good old copying and then running the work through a paraphraser. The large number of absences also means it was hard to benchmark those students based on class assessment (yes, we are always tracking how you are doing in class, not to judge you, but just in case you need some extra help!).

        However, AI is a strong contender since the “open book” part didn’t include the textbook, it allowed the students to take a booklet into the exams with their own notes (including fully worked examples). They scored low because they didn’t understand their own notes, and after reviewing the notes they brought in (all word perfect), it was clear they did not understand the subject.

        • joel_feila@lemmy.world · 3 points · 1 year ago

          Oh, an open-notes test. Man, I never use my notes on those. I try not to use the book on open-book tests.

          • HexesofVexes@lemmy.world · 5 points · 1 year ago

            Curious to know your take on why you avoid using the notes - a couple of my students clearly did this in the final, and insights into why would be welcome!

      • adavis@lemmy.world · 9 points · edited · 1 year ago

        Not the previous poster. I taught an introduction to programming unit for a few semesters. The unit was almost entirely portfolio-based, i.e. all done in class or at home.

        The unit had two litmus tests under exam-like conditions, on paper in class. We’re talking the week 10 test had complexity equal to week 5 or 6. Approximately 15-20% of the cohort failed this test, which, if they were up to date with class work, effectively proved they cheated. They’d be submitting coursework of little 2D games, then on paper be unable to “with a loop, print all the odd numbers from 1 to 20”.
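
        For context, that litmus-test question is about as small as a programming task gets; a minimal sketch of a passing answer (Python is assumed here purely for illustration, since the unit’s language isn’t stated) might look like:

        ```python
        # "With a loop, print all the odd numbers from 1 to 20"
        # (Python assumed for illustration; the original unit's language isn't given.)
        for n in range(1, 21):
            if n % 2 == 1:
                print(n)
        ```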

    • kromem@lemmy.world · 7 points · 1 year ago

      Is AI going to go away?

      In the real world, will those students be working from a textbook, or from a browser with some form of AI accessible in a few years?

      What exactly is being measured and evaluated? Or has the world changed, and existing infrastructure is struggling to cling to the status quo?

      Were those years of students being forced to learn cursive in the age of the computer a useful application of their time? Or math classes where a calculator wasn’t allowed?

      I can hardly think just how useful a programming class where you need to write it on a blank page of paper with a pen and no linters might be, then.

      Maybe the focus on where and how knowledge is applied needs to be revisited in light of a changing landscape.

      For example, how much more practically useful might test questions be that provide a hallucinated wrong answer from ChatGPT and then task the students to identify what was wrong? Or provide them a cross discipline question that expects ChatGPT usage yet would remain challenging because of the scope or nuance?

      I get that it’s difficult to adjust to something that’s changed everything in the field within months.

      But it’s quite likely a fair bit of how education has been done for the past 20 years in the digital age (itself a gradual transition to the Internet existing) needs major reworking to adapt to changes rather than simply oppose them, putting academia in a bubble further and further detached from real world feasibility.

      • SkiDude@lemmy.world · 28 points · 1 year ago

        If you’re going to take a class to learn how to do X, but never actually learn how to do X because you’re letting a machine do all the work, why even take the class?

        In the real world, even if you’re using all the newest, cutting edge stuff, you still need to understand the concepts behind what you’re doing. You still have to know what to put into the tool and that what you get out is something that works.

        If the tool, AI, whatever, is smart enough to accomplish the task without you actually knowing anything, what the hell are you useful for?

        • barsoap@lemm.ee · 3 points · 1 year ago

          why even take the class?

          To have a piece of paper to get a job.

          For CS this is nothing new: we have been dealing with graduates who can’t program, and self-taught geniuses, since before the AI boom, so paper credentials just aren’t as important.

          • pinkdrunkenelephants@sopuli.xyz · 4 points · 1 year ago

            You don’t need a piece of paper to get a decent job. People go to college to get into fields they personally care about. If all you care about is money, go work in sales.

            • barsoap@lemm.ee · 3 points · 1 year ago

              Companies love paper certificates because that means they can outsource judging applicant qualifications. How bad that is differs from company to company and field to field.

              I mean if you’re a small non-IT company, say a machine shop, and you’re looking for a devops and you have no IT department and noone understands anything about what that person is going to do but “is going to run our website and sales platform”… what else are you going to rely on but paper certificates? Hire a consultancy?

              • pinkdrunkenelephants@sopuli.xyz · 2 points · 1 year ago

                Most companies don’t actually require degrees to get decent-paying work. Even a lot of IT work is hired more on documented experience than on having a degree. Having a degree alone simply doesn’t cut it in that field; you have to actually prove you can do things, which a degree can’t really do anymore. Degrees are for academic fields.

                Source: Went to college, then switched to sales which required no outside education, learned and earned a lot more.

        • prosp3kt@lemmy.world · 2 points · 1 year ago

          But that’s actually most of the work we have nowadays. AI is replacing repetitive work, such as magazine writing or script writing.

              • barsoap@lemm.ee · 3 points · 1 year ago

                And junior programmers

                …no. Juniors are hard enough to mentor to write sensible code in the first place; adding AI to that only makes things worse.

                The long-term impacts on AI past what’s already happening (and having an actual positive impact on the products and companies, that is, discount that Hollywood scriptwriting stuff) will be in industrial automation and logistics/transportation. Production lines that can QC on their own as well as a whole army of truck and taxi drivers. AI systems will augment fields such as medicine, but not replace actual doctors. Think providing alternative diagnosis possibilities and recommending suitable tests to be sure kind of stuff, combatting routine tunnel vision by, precisely, being less adaptive than human doctors.

              • pinkdrunkenelephants@sopuli.xyz · 1 point · 1 year ago

                Any platform that does that is going to collapse. Not enough people will watch AI-generated garbage for it to be viable, and those that don’t will simply split off from the internet and entertainment, shrinking and splitting the economy.

              • ZeroHora@lemmy.ml · 1 point · 1 year ago

                I understand that they’ll be replaced, or at least the producers want that, but I don’t think that’s because of repetitive work, more like they need a lot of them.

      • orangeboats@lemmy.world · 12 points · edited · 1 year ago

        As an anecdote though, I once saw someone simply forwarding (i.e. copying and pasting) their exam questions to ChatGPT. His answers were just ChatGPT responses, paraphrased to make them look less GPT-ish. I am not even sure whether he understood the questions themselves.

        In this case, the only skill that is tested… is English paraphrasing.

      • HexesofVexes@lemmy.world · 10 points · 1 year ago

        I’ll field this because it does raise some good points:

        It all boils down to how much you trust what is essentially matrix multiplication, trained on the internet, with some very arbitrarily chosen initial conditions. Early on when AI started cropping up in the news, I tested the validity of answers given:

        1. For topics aimed at 10–18 year olds, it does pretty well. Its answers are generic, and it makes mistakes every now and then.

        2. For 1st–3rd year degree topics, it really starts to make dangerous errors, but it’s a good tool to summarise materials from textbooks.

        3. Masters+, it spews (very convincing) bollocks most of the time.

        Recognising the mistakes in (1) requires checking it against the course notes, something most students manage. Recognising the mistakes in (2) is often something a stronger student can manage, but not a weaker one. As for (3), you are going to need to be an expert to recognise the mistakes (it literally misinterpreted my own work back at me at one point).

        The irony is, education in its current format is already working with AI: it’s teaching people how to correct the errors it gives. Theming assessment around an AI is a great idea, until you have to create one (the very fact it is moving fast means that everything you teach about it ends up out of date by the time a student needs it for work).

        However, I do agree that education as a whole needs overhauling. How to do this: maybe fund it a bit better so we’re able to hire folks to help develop better courses - at the moment every “great course” you’ve ever taken was paid for in blood (i.e. 50 hour weeks teaching/marking/prepping/meeting arbitrary research requirement).

        • Armok: God of Blood@lemmy.world · 1 point · 1 year ago

          (1) seems to be a legitimate problem. (2) is just filtering the stronger students from the weaker ones with extra steps. (3) isn’t an issue unless a professor teaching graduate classes can’t tell BS from truth in their own field. If that’s the case, I’d call the professor’s lack of knowledge a larger issue than the student’s.

          • jarfil@lemmy.world · 3 points · 1 year ago

            You may not know this, but “Masters” is about uncovering knowledge nobody had before, not even the professor. That’s where peer reviews and shit like LK-99 happen.

            • Womble@lemmy.world · 2 points · 1 year ago

              It really isn’t. You don’t start doing properly original research until a year or two into a PhD. At best a masters project is going to be doing something like taking an existing model and applying it to an adjacent topic to the one it was designed for.

        • zephyreks@lemmy.ca · 1 point · 1 year ago

          On the other hand, what if the problem is simply one that’s no longer important for most people? Isn’t technological advancement supposed to introduce abstraction that people can develop on?

          • average650@lemmy.world · 6 points · 1 year ago

            The point is the students can’t get to the higher-level concepts if they’re just regurgitating what ChatGPT says.

          • MBM@lemmings.world · 1 point · 1 year ago

            If you never learn how to do the basics without ChatGPT, it’s a big jump to figure out the advanced topics where ChatGPT no longer helps you

      • pinkdrunkenelephants@sopuli.xyz · 3 points · edited · 1 year ago

        Textbooks, like on physical paper, are never going to just go away. They offer way too many advantages over even reading digital books.

    • Spike@feddit.de · 5 points · 1 year ago

      We have no way to determine if you did the work, or if an AI did, and if called into a court to certify your expertise we could not do so beyond a reasonable doubt.

      Could you ever, though, when giving them work they had to do outside your physical presence? People had their friends, parents or ghostwriters do the work for them all the time. You should know that.

      This is not an AI problem; AI “just” made it far more widespread and easier to access.

      • HexesofVexes@lemmy.world · 4 points · 1 year ago

        “Sometimes” would be my answer. I caught students who colluded during online exams, and even managed to spot students pasting directly from an online search. Those were painful conversations, but I offered them resits and they were all honest and passed with some extra classes.

        With AI, detection is impossible at the moment.

    • mrspaz@lemmy.world
      link
      fedilink
      English
      arrow-up
      5
      ·
      1 year ago

      I recently finished my degree, and exam-heavy courses were the bane of my existence. I could sit down with the homework, work out every problem completely with everything documented, and then sit to an exam and suddenly it’s “what’s a fluid? What’s energy? Is this a pencil?”

      The worst example was a course with three exams worth 30% of the grade, attendance 5% and homework 5%. I had to take the course twice; 100% on HW each time, but barely scraped by with a 70.4% after exams on the second attempt. Courses like that took years off my life in stress. :(

    • Smacks@lemmy.world
      link
      fedilink
      English
      arrow-up
      5
      ·
      1 year ago

      Graduated a year ago, just before this AI craze was a thing.

      I feel there’s a social shift when it comes to education these days. It’s mostly: “do a 500–1,000-word essay to get 1.5% of your grade”. The education doesn’t matter anymore, the grades do; if you pick something up along the way, great! But it isn’t that much of a priority.

      I think it partially comes from colleges squeezing students of their funds, and indifferent professors who just assign busywork for the sake of it. There are a lot of uncaring professors that just throw tons of work at students, turning them back to the textbook whenever they ask questions.

      However, I don’t doubt a good chunk of students use AI on their work to just get it out of the way. That really sucks and I feel bad for the professors that actually care and put effort into their classes. But, I also feel the majority does it in response to the monotonous grind that a lot of other professors give them.

      • HexesofVexes@lemmy.world
        link
        fedilink
        English
        arrow-up
        3
        ·
        1 year ago

        “Avoid at all costs because we hate marking it even more than you hate writing it”?

        An in-person exam can be done in a locked-down IT lab, and that makes for a better marking experience, and I suspect a better exam experience too!

    • deweydecibel@lemmy.world
      link
      fedilink
      English
      arrow-up
      12
      ·
      1 year ago

      AI was never about making our lives easier. It’s been a corporate tool out of the gate.

      But this kind of thing really should be handled by Washington, and maybe it would be if we had a functional Congress. Make it illegal for AI companies to sell their services to people for the purpose of cheating or impersonation.

      • Neve8028@lemm.ee
        link
        fedilink
        English
        arrow-up
        3
        ·
        1 year ago

        Make it illegal for AI companies to sell their services to people for the purpose of cheating or impersonation.

        How would that work? How do you know someone is cheating? AI can be a great studying tool and those same functions could be considered cheating based on the context that the user is using them in. There’s no way to tell what the user’s intent is.

  • MaggiWuerze@feddit.de
    link
    fedilink
    English
    arrow-up
    82
    ·
    1 year ago

    has led some college professors to reconsider their lesson plans for the upcoming fall semester.

    I’m sure they’ll write exams that actually require an actual understanding of the material rather than regurgitating the seminar PowerPoint presentations as accurately as possible…

    No? I’m shocked!

    • OhNoMoreLemmy@lemmy.ml
      link
      fedilink
      English
      arrow-up
      49
      ·
      1 year ago

      We get in trouble if we fail everyone because we made them do a novel synthesis, instead of just repeating what we told them.

      Particularly for an intro course, remembering what you were told is good enough.

      • zigmus64@lemmy.world
        link
        fedilink
        English
        arrow-up
        23
        ·
        1 year ago

        The first step to understanding the material is exactly just remembering what the teacher told them.

        • Hemingways_Shotgun@lemmy.ca
          link
          fedilink
          English
          arrow-up
          23
          ·
          1 year ago

          Meh. I haven’t been in Uni in over 20 years. But it honestly seems kind of practical to me.

          Your first year is usually when you haven’t even settled on a major. Intro classes are less about learning and more about finding out if you CAN learn, and if you’ll actually like the college experience or drop out after your first year.

          The actual learning comes when the crowd has been whittled to those who have the discipline to be there.

          • umbrella@lemmy.ml
            link
            fedilink
            English
            arrow-up
            7
            ·
            edit-2
            1 year ago

            I’m glad you had a better experience in academia than I did. I still want that time back.

            • Hemingways_Shotgun@lemmy.ca
              link
              fedilink
              English
              arrow-up
              2
              ·
              1 year ago

              I would love to have that time and money back.

              One of the disadvantages of being of an age where you straddle the line between the worlds without and with the internet is that you get to watch the 20,000 dollars you spent on learning in the 90s suddenly become available for free in the present.

              Seriously, there isn’t a single thing I learned in my Near Eastern Classical Archaeology degree that I couldn’t just go learn from Wikipedia today.

              • umbrella@lemmy.ml
                link
                fedilink
                English
                arrow-up
                1
                ·
                edit-2
                1 year ago

                I wish! I got roped into doing it after the Internet was available.

                Teachers half-assedly pretended to teach and we half-assedly pretended to learn, because we thought that piece of paper at the end would make a difference.

                Turns out googling shit instead of being in debt was the way to go all along.

          • Dark Arc@social.packetloss.gg
            link
            fedilink
            English
            arrow-up
            4
            ·
            edit-2
            1 year ago

            if you CAN learn

            I always found this argument completely unsatisfactory…

            Imagine someone coming up to you and saying “you must learn to juggle otherwise you can’t be a fisherman” and then after 14 years of learning to juggle, they say “you don’t actually need to know how to juggle, we just had to see if you CAN learn. Now I can teach you to fish.”

            You’d be furious. But, because we all grew up with this nonsense we just accept it. Everyone can learn, there’s just tons of stuff that people find uninteresting to learn, and thus don’t unless forced; especially when the format is extremely dry, unengaging, and you’ve already realized… You’re never going to need to know how to juggle to be a fisherman… ever.

            The show “Are you smarter than a fifth grader?” (IMO) accurately captures just how worthless 90% of that experience is to the average adult. I’ve forgotten so much from school, and that’s normal.

            The actual learning comes when the crowd has been whittled to those who have the discipline to be there.

            Also this is just ridiculous, “Everyone is a genius. But if you judge a fish by its ability to climb a tree, it will live its whole life believing that it is stupid.”

            • Tavarin@lemmy.ca
              link
              fedilink
              English
              arrow-up
              9
              ·
              1 year ago

              You do realize you get to choose which courses to take in undergrad right? Universities aren’t forcing you to take any of the courses, you choose ones in subjects you are interested in, and first year is to get you up to speed/introduce you to those subjects, so you can decide if you want to study them further.

              Once you have a major or specialist program, then yeah, you have some required courses, but they do tend to be things very relevant to what you want to do.

              • Dark Arc@social.packetloss.gg
                link
                fedilink
                English
                arrow-up
                2
                ·
                edit-2
                1 year ago

                You do realize you get to choose which courses to take in undergrad right? Universities aren’t forcing you to take any of the courses, you choose ones in subjects you are interested in, and first year is to get you up to speed/introduce you to those subjects, so you can decide if you want to study them further.

                That’s not true at all, every degree has a required core curriculum at every university I’ve ever heard of (e.g., humanities, some amount of math, some amount of English, etc). It also says nothing for the K-12 years.

                • Tavarin@lemmy.ca
                  link
                  fedilink
                  English
                  arrow-up
                  3
                  ·
                  1 year ago

                  In my university you had breadth requirements, but it was 1 humanities course, 1 social science, and 1 science, and you could pick any course within those areas to fulfill the requirement. So you had a lot of choice within the core curriculum. Man, if other unis aren’t doing that, that sucks.

    • Aurenkin@sh.itjust.works
      link
      fedilink
      English
      arrow-up
      10
      ·
      1 year ago

      My favourite lecturer at uni actually did that really well. He also said the exam was small and could be done in about an hour or two but gave us a 3 hour timeslot because he said he wanted us to take our time and think about each problem carefully. That was a great class.

    • ipkpjersi@lemmy.ml
      link
      fedilink
      English
      arrow-up
      3
      ·
      1 year ago

      IME, a lot of professors liked to write exams that specifically didn’t cover anything from the PowerPoint presentations lol

    • Holyginz@lemmy.world
      link
      fedilink
      English
      arrow-up
      27
      ·
      1 year ago

      Millennial here, haven’t had to seriously write out anything consistently in decades at this point. There’s no way their handwriting can be worse than mine and still be legible lol.

      • crwcomposer@lemmy.world
        link
        fedilink
        English
        arrow-up
        16
        ·
        edit-2
        1 year ago

        As a millennial with gen Z teens, theirs is worse, though somehow not illegible, lol. They just write like literal 6 year olds.

      • Negrodamus@lemmy.world
        link
        fedilink
        English
        arrow-up
        6
        ·
        1 year ago

        Same, and the times I’ve had to write, my hand cramped up so quickly from those muscles not being active for years.

      • mwguy@infosec.pub
        link
        fedilink
        English
        arrow-up
        6
        ·
        1 year ago

        You’d be so surprised. From my interactions with my younger cousins and in laws, they can’t even write in cursive.

        • dragonflyteaparty@lemmy.world
          link
          fedilink
          English
          arrow-up
          9
          ·
          1 year ago

          As much as I like using cursive, it’s not a necessary writing style and wasn’t taught to me in elementary. I’m 32, so it’s been out of the curriculum here for quite some time.

          • mwguy@infosec.pub
            link
            fedilink
            English
            arrow-up
            1
            ·
            1 year ago

            If you’re going to handwrite multiple essays in a blue book/exam format throughout a 4–10 year post-high-school period, you need cursive. It’s faster, easier on the wrist and fingers, and easier to read.

        • CaptainPedantic@lemmy.world
          link
          fedilink
          English
          arrow-up
          8
          ·
          1 year ago

          I’m in the weird in between gen z and millennial. I only use cursive to sign my name and read grandma’s Christmas card. Frankly, it’s not useful for me. I’m glad we spent the time in school taking typing classes instead of cursive.

          What is crazy to me is that my youngest cousins (in their early teens) use the hunt and peck method to type. Despite that, they’re not super slow. I was absolutely shocked when I found that out. I think it was all the years of using a phone or tablet instead of an actual keyboard that created a habit.

          • mwguy@infosec.pub
            link
            fedilink
            English
            arrow-up
            1
            ·
            1 year ago

            What is crazy to me is that my youngest cousins (in their early teens) use the hunt and peck method to type.

            They don’t have typing classes anymore. Crazy I know. But my gen Z relatives do the same thing.

      • Ulv@feddit.nu
        link
        fedilink
        English
        arrow-up
        4
        ·
        1 year ago

        Last week of school I found out my history teacher took all my handwritten things to the language teacher and had her copy them into legibility. I felt so bad for that lady.

      • DominusOfMegadeus@sh.itjust.works
        link
        fedilink
        English
        arrow-up
        1
        ·
        edit-2
        1 year ago

        I block print and vary caps and lowercase fairly randomly. I have particular trouble with the number 5. I guess it’s legible, but it sure ain’t pretty. It’s also fucking torture, and I would walk right out of school if this were done to me. Oh yeah, I’m Gen X.

    • Flying Squid@lemmy.world
      link
      fedilink
      English
      arrow-up
      8
      ·
      1 year ago

      You’d be surprised. My daughter (13) has better penmanship than I do (46). Although I’m sure my left-handedness doesn’t help there.

  • TimewornTraveler@lemm.ee
    link
    fedilink
    English
    arrow-up
    62
    ·
    1 year ago

    Can we just go back to calling this shit Algorithms and stop pretending it’s actually Artificial Intelligence?

    • WackyTabbacy42069@reddthat.com
      link
      fedilink
      English
      arrow-up
      56
      ·
      1 year ago

      It actually is artificial intelligence. What are you even arguing against man?

      Machine learning is a subset of AI and neural networks are a subset of machine learning. Saying an LLM (based on neural networks for prediction) isn’t AI because you don’t like it is like saying rock and roll isn’t music.

      • over_clox@lemmy.world
        link
        fedilink
        English
        arrow-up
        16
        ·
        1 year ago

        If AI was ‘intelligent’, it wouldn’t have written me a set of instructions when I asked it how to inflate a foldable phone. Seriously, check my first post on Lemmy…

        https://lemmy.world/post/1963767

        An intelligent system would have stopped to say something like “I’m sorry, that doesn’t make any sense, but here are some related topics to help you”

        • WackyTabbacy42069@reddthat.com
          link
          fedilink
          English
          arrow-up
          16
          ·
          1 year ago

          AI doesn’t even necessitate a machine being capable of stringing together complex English into a series of steps towards something pointless and unattainable. That in itself is remarkable, however naive it may be to believe you when you say a foldable phone can be inflated. You may be confusing AI with AGI, which is when the intelligence and reasoning level is at or slightly above a human’s.

          The only real requirement for AI is that a machine take actions in an intelligent manner. Web search engines, dynamic traffic lights, and Chess bots all qualify as AI, despite none of them being able to tell you rubbish in proper English

          • TimewornTraveler@lemm.ee
            link
            fedilink
            English
            arrow-up
            8
            ·
            edit-2
            1 year ago

            The only real requirement for AI is that a machine take actions in an intelligent manner.

            There’s the rub: defining “intelligent”.

            If you’re arguing that traffic lights should be called AI, then you and I might have more in common than we thought. We both believe the same things: that ChatGPT isn’t any more “intelligent” than a traffic light. But you want to call them both intelligent and I want to call neither so.

            • throwsbooks@lemmy.ca
              link
              fedilink
              English
              arrow-up
              12
              ·
              1 year ago

              I think you’re conflating “intelligence” with “being smart”.

              Intelligence is more about taking in information and being able to make a decision based on that information. So yeah, automatic traffic lights are “intelligent” because they use a sensor to check for the presence of cars and “decide” when to switch the light.

              Acting like some GPT is on the same level as a traffic light is silly though. On a base level, yes, it “reads” a text prompt (along with any messaging history) and decides what to write next. But that decision it’s making is much more complex than “stop or go”.

              I don’t know if this is an ADHD thing, but when I’m talking to people, sometimes I finish their sentences in my head as they’re talking. Sometimes I nail it, sometimes I don’t. That’s essentially what chatGPT is, a sentence finisher that happened to read a huge amount of text content on the web, so it’s got context for a bunch of things. It doesn’t care if it’s right and it doesn’t look things up before it says something.

              But to have a computer be able to do that at all?? That’s incredible, and it took over 50 years of AI research to hit that point (yes, it’s been a field in universities for a very long time, with people saying for most of that time that it’s impossible), and we only hit it because our computers got powerful enough to do it at scale.
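              If it helps, here’s a toy sketch of that “sentence finisher” idea in plain Python. To be clear, this is just a word-frequency toy and nothing like GPT’s actual neural network; the corpus and everything else here is made up purely for illustration:

              ```python
              import random
              from collections import defaultdict

              # Learn which word tends to follow which, from a tiny made-up corpus.
              corpus = "the cat sat on the mat and the cat slept on the mat".split()
              next_words = defaultdict(list)
              for current, following in zip(corpus, corpus[1:]):
                  next_words[current].append(following)

              # "Finish the sentence" by repeatedly sampling a plausible next word.
              word = "the"
              sentence = [word]
              for _ in range(8):
                  candidates = next_words.get(word)
                  if not candidates:
                      break  # no known continuation, so stop
                  word = random.choice(candidates)
                  sentence.append(word)

              print(" ".join(sentence))  # e.g. "the cat sat on the mat and the cat"
              ```

              It has no idea whether what it prints is true; it only knows what tends to come next. Scale the table up to most of the internet and swap the counting for a neural network, and you get something much closer to ChatGPT.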

              • ParsnipWitch@feddit.de
                link
                fedilink
                English
                arrow-up
                1
                ·
                edit-2
                1 year ago

                Intelligence is more about taking in information and being able to make a decision based on that information.

                Where does that come from? A better gauge for intelligence is whether someone or something is able to resolve a problem that they did not encounter before. And arguably all current models completely suck at that.

                I also think the word “AI” is used quite a bit too liberally. It confuses people who have zero knowledge of the topic. And when an actual AI comes along we will have to make up a new word, because “general artificial intelligence” won’t be distinctive enough for corporations to market their new giant leap in technology….

                • throwsbooks@lemmy.ca
                  link
                  fedilink
                  English
                  arrow-up
                  1
                  ·
                  edit-2
                  1 year ago

                  I would suggest the textbook Artificial Intelligence: A Modern Approach by Russell and Norvig. It’s a good overview of the field and has been in circulation since 1995. https://en.m.wikipedia.org/wiki/Artificial_Intelligence:_A_Modern_Approach

                  Here’s a photo, as an example of how this book approaches the topic, in that there’s an entire chapter on it with sections on four approaches, and that essentially even the researchers have been arguing about what intelligence is since the beginning.

                  But all of this has been under the umbrella of AI. Just because corporations have picked up on it, doesn’t invalidate the decades of work done by scientists in the name of AI.

                  My favourite way to think of it is this: people have forever argued whether or not animals are intelligent or even conscious. Is a cat intelligent? Mine can manipulate me, even if he can’t do math. Are ants intelligent? They use the same biomechanical constructs as humans, but at a simpler scale. What about bacteria? Are viruses alive?

                  If we can create an AI that fully simulates a cockroach, down to every firing neuron, does it mean it’s not AI just because it’s not simulating something more complex, like a mouse? Does it need to exceed a human to be considered AI?

            • sin_free_for_00_days@sopuli.xyz
              link
              fedilink
              English
              arrow-up
              7
              ·
              1 year ago

              I’m with you on this and think the AI label is just stupid and misleading. But times/language change and you end up being a Don Quixote type figure.

        • XTornado@lemmy.ml
          link
          fedilink
          English
          arrow-up
          8
          ·
          1 year ago

          Whether it should be called AI or not, no idea…

          But some humans are intelligent and, let’s be clear… say crazier things…

        • Jordan Lund@lemmy.one
          link
          fedilink
          English
          arrow-up
          4
          ·
          edit-2
          1 year ago

          Inflating a phone is super easy though!

          Overheat the battery. ;) Phone will inflate itself!

        • jarfil@lemmy.world
          link
          fedilink
          English
          arrow-up
          4
          ·
          1 year ago

          If “making sense” was a requirement of intelligence… there would be no modern art museums.

          • over_clox@lemmy.world
            link
            fedilink
            English
            arrow-up
            1
            ·
            1 year ago

            Instructions unclear, inflates phone.

            Seriously, if it was actually intelligent, yet also writing out something meant to be considered ‘art’, I’d expect it to also have a disclaimer at the end declaring it as satire.

            • jarfil@lemmy.world
              link
              fedilink
              English
              arrow-up
              1
              ·
              1 year ago

              That would require a panel of AIs to discuss whether “/s” or “no /s”…

              As it is, it writes anything a person could have written, some of it great, some of it straight from Twitter. We are supposed to presume at least some level of intelligence for either.

      • TimewornTraveler@lemm.ee
        link
        fedilink
        English
        arrow-up
        14
        ·
        edit-2
        1 year ago

        I am arguing against this marketing campaign, that’s what. Who decides what “AI” is and how did we come to decide what fits that title? The concept of AI has been around a long time, like since the Greeks, and it had always been the concept of a man-made man. In modern times, it’s been represented as a sci-fi fantasy of sentient androids. “AI” is a term with heavy association already cooked into it. That’s why calling it “AI” is just a way to make it sound like high-tech, futuristic dreams come true. But a predictive text algorithm is hardly “intelligence”. It’s only being called that to make it sound profitable. Let’s stop calling it “AI” and start calling out their bullshit. This is just another cryptocurrency scam. It’s a concept that could theoretically work and be useful to society, but it is not being implemented in such a way that lives up to its name.

        • BigNote@lemm.ee
          link
          fedilink
          English
          arrow-up
          10
          ·
          1 year ago

          The field of computer science decided what AI is. It has a very specific set of meanings and some rando on the Internet isn’t going to upend decades of usage just because it doesn’t fit their idea of what constitutes AI or because they think it’s a marketing gimmick.

          It’s not. It’s a very specific field in computer science that’s been worked on since the 1950s at least.

    • Venia Silente@lemm.ee
      link
      fedilink
      English
      arrow-up
      11
      ·
      1 year ago

      Please let’s not defame Dijkstra and other Algorithms like this. Just call them “corporate crap”, which is what they are.

    • chicken@lemmy.dbzer0.com
      link
      fedilink
      English
      arrow-up
      8
      ·
      1 year ago

      Maybe machine learning models technically fit the definition of “algorithm” but it suits them very poorly. An algorithm is traditionally a set of instructions written by someone, with connotations of being high level, fully understood conceptually, akin to a mathematical formula.

      A machine learning model is a soup of numbers that maybe does something approximately like what the people training it wanted it to do, using arbitrary logic nobody can expect to follow. “Algorithm” is not a great word to describe that.
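      To make the contrast concrete, here’s a rough Python sketch (the numbers and functions are invented purely for illustration):

      ```python
      # A traditional algorithm: every step is an explicit, human-written rule.
      def is_even(n: int) -> bool:
          return n % 2 == 0

      # A "model": the behaviour lives in numbers produced by training, not in
      # readable rules. These three weights are made up; real models have billions.
      weights = [0.31, -1.7, 2.05]

      def model(features: list[float]) -> float:
          return sum(w * x for w, x in zip(weights, features))

      print(is_even(42))             # True, and you can see exactly why
      print(model([1.0, 0.5, 2.0]))  # ~3.56, and the "why" is buried in the weights
      ```

      You can read the first one like a recipe; the second one you can only really characterise by how it behaves.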

  • aulin@lemmy.world
    link
    fedilink
    English
    arrow-up
    59
    ·
    1 year ago

    There are places where analog exams went away? I’d say Sweden has always been at the forefront of technology, but our exams were always pen-and-paper.

    • CapeWearingAeroplane@sopuli.xyz
      link
      fedilink
      English
      arrow-up
      6
      ·
      1 year ago

      Norway has been pushing digital exams for quite a few years, to the point where high school exams went to shit for lots of people this year because the system went down and they had no backup (who woulda thought?). In at least some universities most of or all exams have been digital for a couple years.

      I think this is largely a bad idea, especially for engineering exams, or any exam where you need to draw/sketch or write equations. For purely textual exams, it’s fine. This has also led to many more multiple-choice or otherwise automatically graded questions, which the universities explicitly state is a way of cutting costs. I think that’s terrible; nothing at university level should be reduced to a multiple-choice question. They should be forbidden.

    • CoderKat@lemm.ee
      link
      fedilink
      English
      arrow-up
      4
      ·
      1 year ago

      The university I went to explicitly did in person written exams for pretty much all exams specifically for anti-cheating (even before the age of ChatGPT). Assignments would use computers and whatnot, but the only way to reliably prevent cheating is to force people to write the exams in carefully controlled settings.

      Honestly, probably could have still used computers in controlled settings, but pencil and paper is just simpler and easier to secure.

      One annoying thing is that this meant they also usually viewed assignments as untrusted and thus not worth much of the grade. You’d end up with assignments taking dozens of hours but only worth, say, 15% of your final grade. So practically all your grade is on a couple of big, stressful exams. A common breakdown I had was like 15% assignments, 15% midterm, and 70% final exam.

  • Rozz@lemmy.sdf.org
    link
    fedilink
    English
    arrow-up
    53
    ·
    1 year ago

    Am I wrong in thinking students can still generate an essay and then copy it out by hand?

    • CrimsonFlash@lemmy.ca
      link
      fedilink
      English
      arrow-up
      56
      ·
      1 year ago

      Not during class. Most likely a proctored exam. No laptops, no phones, teacher or proctor watching.

      • Syrc@lemmy.world
        link
        fedilink
        English
        arrow-up
        3
        ·
        1 year ago

        …then why can’t you do that with a school laptop that can’t access the web…?

          • Syrc@lemmy.world
            link
            fedilink
            English
            arrow-up
            1
            ·
            1 year ago

            And so do colleges. If they don’t want to invest $2000 every 5–6 years in a hundred dumpster Windows 95 PCs, it shouldn’t be the paying students who suffer.

    • drekly@lemmy.world
      link
      fedilink
      English
      arrow-up
      1
      ·
      1 year ago

      Sounds like effort, I’m making a font out of my handwriting and getting a 3d printer to write it

      • Rozz@lemmy.sdf.org
        link
        fedilink
        English
        arrow-up
        2
        ·
        1 year ago

        Obviously that is the next step for the technically inclined, but even the less inclined may be capable of generating, then copying, to save time and brain effort.

  • neptune@dmv.social
    link
    fedilink
    English
    arrow-up
    51
    ·
    1 year ago

    This isn’t exactly novel. Some professors allow a cheat sheet. But that just means that the exam will be harder.

    Physics exam that allows a cheat sheet asks you to derive the law of gravity. Well, OK, you write the answer at the bottom, pulled from your cheat sheet. Now what? If you recall how it was originally derived, you probably write Newton’s three laws at the top of your paper… and then start doing some math.

    Calculus exam that lets you use Wolfram Alpha? Just a really hard exam where you must show all of your work.

    Now, with ChatGPT, it’s no longer enough to assign a take-home essay to force students to engage with the material, so you find new ways to do so. Written, in-person essays are certainly a way to do that.

    • NocturnalMorning@lemmy.world
      link
      fedilink
      English
      arrow-up
      6
      ·
      1 year ago

      Hate to break it to you, but you picked probably the one law in physics that is empirically derived. There is no mathematical equation to derive Newton’s law of gravity.

      • neptune@dmv.social
        link
        fedilink
        English
        arrow-up
        3
        ·
        1 year ago

        Yes, but you can still start with Kepler and Newton’s three laws and, with basic math skills, recreate the equation. I know, because it was on a physics exam I took ten years ago.
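        For anyone curious, a minimal sketch of that derivation (assuming circular orbits, which is how the exam version usually goes) is:

        $$F = \frac{mv^2}{r}, \qquad v = \frac{2\pi r}{T}, \qquad T^2 = kr^3 \ \text{(Kepler III)}$$

        $$\Rightarrow \ F = \frac{m}{r}\left(\frac{2\pi r}{T}\right)^2 = \frac{4\pi^2 m r}{T^2} = \frac{4\pi^2}{k}\cdot\frac{m}{r^2} \ \propto \ \frac{m}{r^2}$$

        By Newton’s third law the same force acts on both bodies, so by symmetry the constant must carry the other mass as well, which is how you end up at $F = G\frac{mM}{r^2}$.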

    • assassin_aragorn@lemmy.world
      link
      fedilink
      English
      arrow-up
      5
      ·
      1 year ago

      Or you have the classic, “you can write anything down that you’d like, but it won’t tell you how to answer the questions”.

      And, in fact, it doesn’t help at all beyond a few formulas. I was ChemE, our cheat sheets never saved us.

  • Mugmoor@lemmy.dbzer0.com
    link
    fedilink
    English
    arrow-up
    43
    ·
    1 year ago

    When I was in College for Computer Programming (about 6 years ago) I had to write all my exams on paper, including code. This isn’t exactly a new development.

    • whatisallthis@lemm.ee
      link
      fedilink
      English
      arrow-up
      30
      ·
      1 year ago

      So what you’re telling me is that written tests have, in fact, existed before?

      What are you some kind of education historian?

      • Eager Eagle@lemmy.world
        link
        fedilink
        English
        arrow-up
        5
        ·
        1 year ago

        He’s not just pointing out that handwritten tests aren’t something new; he’s pointing out that using handwritten tests instead of typed ones to reflect the student’s actual abilities isn’t new.

    • lunarul@lemmy.world
      link
      fedilink
      English
      arrow-up
      12
      ·
      1 year ago

      I had some teachers ask for handwritten programming exams too (that was more like 20 years ago for me) and it was just as dumb then as it is today. What exactly are they preparing students for? No job will ever require the skill of writing code on paper.

      • Dark Arc@social.packetloss.gg
        link
        fedilink
        English
        arrow-up
        11
        ·
        edit-2
        1 year ago

        What exactly are they preparing students for? No job will ever require the skill of writing code on paper.

        Maybe something like, a whiteboard interview…? They’re still incredibly common, especially for new grads.

        • lunarul@lemmy.world
          link
          fedilink
          English
          arrow-up
          6
          ·
          1 year ago

          A company that still does whiteboard interviews is one I have no interest in working for. When I interview candidates I want to see how they will perform in their job. Their job will not involve writing code on whiteboards, solving weird logic problems, or knowing how to solve the traveling salesman problem off the top of their heads.

          • Dark Arc@social.packetloss.gg
            link
            fedilink
            English
            arrow-up
            3
            ·
            1 year ago

            That’s a valid opinion, and I largely share it. But, all these students need to work somewhere. This is something the industry needs to change before the school changes it.

            Also, I’ve definitely done white board coding discussions in practice, e.g., go into a room, write up ideas on the white board (including small snippets of code or pseudo code).

            • lunarul@lemmy.world
              link
              fedilink
              English
              arrow-up
              3
              ·
              1 year ago

              I’ve definitely done white board coding discussions in practice, e.g., go into a room, write up ideas on the white board (including small snippets of code or pseudo code).

              I’ve done that too back before the remote work era, but using a whiteboard as a visual aid is not the same thing as solving a whole problem on a whiteboard.

              • Dark Arc@social.packetloss.gg
                link
                fedilink
                English
                arrow-up
                1
                ·
                edit-2
                1 year ago

                It’s a close enough problem; a lot of professors I’ve known aren’t going to sweat the small stuff on paper. Like, they’re not plugging your code into a computer and expecting it to build, they’re just looking for the algorithm design, and that there’s no grotesque violation of the language rules.

                Sure, some are going to be a little harsher (“you missed a semicolon here”), but even then, if you’re doing your work, that’s not a hard thing to overcome, and it’s only going to cost you a handful of points (if that sort of stuff is your only mistake you get a 92 instead of a 100).

          • pinkdrunkenelephants@sopuli.xyz
            link
            fedilink
            English
            arrow-up
            2
            ·
            1 year ago

            And what happens when you run into the company that wants people who can prove they conceptually understand what the hell it is they’re doing on their own, which requires a whiteboard?

            I program as a hobby and I’ll jot down code and plans for programs on paper when I am out and about during the day. The fuck kind of dystopian hellhole mindset do you have where you think all that matters is doing the bare minimum to survive? You know that life means more than that, don’t you?

            • lunarul@lemmy.world
              link
              fedilink
              English
              arrow-up
              1
              ·
              1 year ago

              The ability to conceptually understand what they’re doing is exactly what I’m testing for when interviewing. Writing a full program on a whiteboard is definitely not required for that. I can get that from asking them questions, observing how they approach the problem, what kind of questions they ask me, etc.

              I definitely don’t want them to do just the bare minimum to survive, or to need to ask me for advice at every step (I’ve had people who ended up taking more of my time than it would’ve taken me to do their job myself).

              I’ve never needed to write more than a short snippet of code at a time on a whiteboard, slack channel, code review, etc. in my almost 20 years in the industry. Definitely not to solve a whole problem blindly. In fact I definitely see it as a red flag when a candidate writes a lot of code without ever stopping to execute and test each piece individually. It simply becomes progressively more difficult to debug the more you add to it, that’s common sense.

        • Eager Eagle@lemmy.world
          link
          fedilink
          English
          arrow-up
          3
          ·
          edit-2
          1 year ago

          Which is equally useless. In the end you’re developing a skill that will only be used in tests. You’re training to be evaluated instead of to do a job well.

        • lunarul@lemmy.world
          link
          fedilink
          English
          arrow-up
          3
          ·
          1 year ago

          I personally never had a problem performing well in those tests, I happen to have the skill to compile code in my head, and it is a helpful skill in my job (I’ve been a software engineer for 19 years now), but it’s definitely not a required skill and should not be considered as such.

    • Eager Eagle@lemmy.world
      link
      fedilink
      English
      arrow-up
      2
      ·
      1 year ago

      Same. All my algorithms and data structures courses in undergrad and grad school had paper exams. I have a mixed view on these but the bottom line is that I’m not convinced they’re any better.

      Sure, they might reflect some of the student’s abilities better, but if you’re an evaluator interested in assessing a student’s knowledge, a more effective way is to ask directed questions.

      What ends up happening a lot of the time is implementation questions that ask too much of the student at once: interpretation of the problem; knowledge of helpful data structures and algorithms; abstract reasoning; edge-case analysis; syntax; time and space complexities; and a good sense of planning, since you’re supposed to answer in a few minutes without the luxury and conveniences of a text editor.

      This last one is my biggest problem with it. It adds a great deal of difficulty and stress without adding any value to the evaluator.

  • Four_lights77@lemm.ee
    link
    fedilink
    English
    arrow-up
    42
    ·
    1 year ago

    This thinking just feels like moving in the wrong direction. As an elementary teacher, I know that by next year all my assessments need to be practical or interview based. LLMs are here to stay and the quicker we learn to work with them the better off students will be.

    • pinkdrunkenelephants@sopuli.xyz
      link
      fedilink
      English
      arrow-up
      28
      ·
      1 year ago

      And forget about having any sort of integrity or explaining to kids why it’s important for them to know how to do shit themselves instead of being wholly dependent on corporate proprietary software whose accessibility can and will be manipulated to serve the ruling class on a whim 🤦

      • Not_Alec_Baldwin@lemmy.world
        link
        fedilink
        English
        arrow-up
        23
        ·
        edit-2
        1 year ago

        It’s insane talking to people that don’t do math.

        You ask them any mundane question and they just shrug, and if you press them they pull out their phone to check.

        It’s important that we do math so that we develop a sense of numeracy. By the same token it’s important that we write because it teaches us to organize our thoughts and communicate.

        These tools will destroy the quality of education for the students who need it the most if we don’t figure out how to rein in their use.

        If you want to plug your quarterly data into GPT to generate a projection report I couldn’t care less. But for your 8th grade paper on black holes, write it your damn self.

        • pinkdrunkenelephants@sopuli.xyz
          link
          fedilink
          English
          arrow-up
          8
          ·
          1 year ago

          Putting quarterly data into ChatGPT is dangerous for companies because that information is being fed into the AI and accessible by its creators, which means you’re just giving away proprietary information and trade secrets by doing that. But do these chucklefucks give one single shit? No. Because they’re selfish, lazy assholes that want robots to do their thinking for them.

          • joel_feila@lemmy.world
            link
            fedilink
            English
            arrow-up
            2
            ·
            1 year ago

            Well, with more and more data and services going through the cloud, companies don’t seem to care about data sharing.

      • jarfil@lemmy.world
        link
        fedilink
        English
        arrow-up
        5
        ·
        1 year ago

        wholly dependent on corporate proprietary software

        FLOSS would want a word with you.

        • pinkdrunkenelephants@sopuli.xyz
          link
          fedilink
          English
          arrow-up
          4
          ·
          1 year ago

          The way we have allowed corporations to take over the internet as a whole is deeply problematic for those reasons too, I agree with you. And it’s awful seeing what we’ve become.

    • SamC@lemmy.nz
      link
      fedilink
      English
      arrow-up
      7
      ·
      1 year ago

      Good luck doing one on one assessments in a uni course of 300+

  • UsernameIsTooLon@lemmy.world
    link
    fedilink
    English
    arrow-up
    38
    ·
    1 year ago

    You can still have AI write the paper and you copy it from text to paper. If anything, this will make AI harder to detect because it’s now AI + human error during the transferring process rather than straight copying and pasting for students.

    • Zacryon@feddit.de
      link
      fedilink
      English
      arrow-up
      5
      ·
      1 year ago

      Noooo. That’s a genius countermeasure without any obvious drawbacks!!1! /s

  • Mtrad@lemm.ee
    link
    fedilink
    English
    arrow-up
    31
    ·
    1 year ago

    Wouldn’t it make more sense to find ways on how to utilize the tool of AI and set up criteria that would incorporate the use of it?

    There could still be classes / lectures that cover the more classical methods, but I remember being told “you won’t have a calculator in your pocket”.

    My point is, they should be prepping students with the skills to succeed with the tools they will have available, and then give them the education to cover the gaps that AI can’t solve. For example, you basically need to review what the AI outputs for accuracy. So maybe a focus on reviewing output and better prompting techniques? Training on how to spot inaccuracies? Spotting possible bias in a system that’s skewed by its training data?

    • Atomic@sh.itjust.works
      link
      fedilink
      English
      arrow-up
      15
      ·
      1 year ago

      That’s just what we tell kids so they’ll learn to do basic math on their own. Otherwise you’ll end up with people who can’t even do 13+24 without having to use a calculator.

      • Overzeetop@lemmy.world
        link
        fedilink
        English
        arrow-up
        7
        ·
        1 year ago

        people who can’t even do 13+24 without having to use a calculator

        More importantly, you end up with people who don’t recognize that 13+24=87 is incorrect. Math->calculator is not about knowing the math, per se, but knowing enough to recognize when it’s wrong.

        I don’t envy professors/teachers who are having to figure out novel ways of determining the level of mastery of a class of 30, 40, or 100 students in the era of online assistance. Because, really, we still need people who can turn out top-level, accurate, well-researched documentation. If we lose them, who will we train the next-gen LLM on? ;-)

      • Arthur_Leywin@lemmy.world
        link
        fedilink
        English
        arrow-up
        5
        ·
        1 year ago

        When will people need to do basic arithmetic in their head? The difficulty between 13+24 and 169+742 rises dramatically. Yeah, it makes your life convenient if you can add simple numbers, but is it necessary when everyone has a calculator?

        • Atomic@sh.itjust.works
          link
          fedilink
          English
          arrow-up
          3
          ·
          1 year ago

          Like someone said. It’s not just about knowing what something is, but having the ability to recognize what something isn’t.

          The ability to look at a result and be skeptical if it doesn’t look reasonable.

          169+742? Just by looking I can tell it has to be pretty close to 900, because 160+740 is 900. That gives me a good estimate to go by. So when I arrive at 911, I can look at it and say: yeah, that’s probably correct, it looks reasonable.

        • Mtrad@lemm.ee
          link
          fedilink
          English
          arrow-up
          1
          ·
          1 year ago

          That sounds like it could be a focused lesson. Why try to skirt around what the desired goal is?

          That also could be placed into detecting if something is wrong with AI too. Teach people things to just help spot these errors.

          In my experience, it’s so much more effective to learn how to find the answers and spot the issues than to memorize how to do everything. There’s too much now to know it all yourself.

    • Revv@lemmy.blahaj.zone
      link
      fedilink
      English
      arrow-up
      13
      ·
      1 year ago

      Training how to use “AI” (LLMs demonstrably possess zero actual reasoning ability) feels like it should be a separate pursuit from (or subset of) general education to me. In order to effectively use “AI”, you need to be able to evaluate its output and reason for yourself whether it makes any sense or simply bears a statistical resemblance to human language. Doing that requires solid critical-reasoning skills, which you can only develop by engaging personally with countless unique problems over the course of years and working them out for yourself. Even prior to the rise of ChatGPT and its ilk, there was emerging research showing diminishing reasoning skills in children.

      Without some means of forcing students to engage cognitively, there’s little point in education. Pen and paper seems like a pretty cheap way to get that done.

      I’m all for tech and using the tools available, but without a solid educational foundation (formal or not), I fear we end up a society of snake-oil users in search of blinker fluid.

    • settxy@lemmy.world
      link
      fedilink
      English
      arrow-up
      12
      ·
      1 year ago

      There are some universities looking at AI from this perspective, finding ways to teach proper usage of AI. Then building testing methods around the knowledge of students using it.

      Your point about checking for accuracy is spot on. AI doesn’t always puke out good information, and ensuring students don’t just blindly believe it NEEDS to be taught. Otherwise you end up being these guys… https://apnews.com/article/artificial-intelligence-chatgpt-courts-e15023d7e6fdf4f099aa122437dbb59b

    • Snekeyes@lemmy.world
      link
      fedilink
      English
      arrow-up
      6
      ·
      1 year ago

      It’s like the calculator in the 80s and 90s. Teachers would constantly tell us “no job’s just gonna let you use a calculator, they’re paying you to work”…

      I graduated, and really thought companies were gonna make me do stuff by hand, ’cause calculators made it easy. Lol.

  • thedirtyknapkin@lemmy.world
    link
    fedilink
    English
    arrow-up
    28
    ·
    1 year ago

    As someone with wrist and hand problems that make writing a lot by hand painful, I’m so lucky I finished college in 2019.