Random Ramblings on LabVIEW Design


Re: AI is going to take our jobs - everyone should panic now.

swatts
Active Participant

Hello Panicking Programmers,

I have click-baited you in so that I can offer a slightly less excitable view of the world!

 

An old adage is that a good engineer is a lazy engineer, and like most truisms there is an element of accuracy to the statement, but I think it also lacks completeness. A good engineer acts as if they are lazy, but uses the time they save to do more stuff.

 

Because laziness is an issue in my experience, I take a muscular view of the world: I think the brain is a muscle and it needs exercise to get better. Without that exercise and stimulation it will atrophy and very quickly become reliant on being fed information. As an example, spell-checkers and SatNavs have actually made my spelling and navigation worse.

 

Currently we are being sold the idea that various Large Language Models will take our jobs. The reasoning from people who don't know better is that ChatGPT can write a simple program, therefore all programming will soon be done by ChatGPT. Spoiler Alert: it's not quite so simple!

 

First, a word about the people selling this: they seem to be the same people who were selling Bitcoin and blockchain a while back. Similarly, sharp companies are putting AI in their marketing to fool investors. I now view it as a giant red flag! And I apologise to the good people in marketing and computer science; soon the chancers will find some new nonsense to market and the business will normalise again.

 

[Image generated by izea.com]

 

Future Egg (c/o ChatGPT)

 

"Imagine an egg, perhaps nestled on a high-tech, futuristic surface. The egg could be glowing softly, emitting a gentle, warm light. Surrounding the egg are intricate patterns or symbols representing the algorithms and neural networks of artificial intelligence.

Furthermore, you might envision faint digital tendrils extending from the egg, symbolizing the connectivity and intelligence that AI brings to enhance and optimize processes.

Feel free to envision the egg's surroundings as vibrant and filled with the energy of technology and innovation, all focused on the humble yet powerful symbol of potential: the egg."

 

 

So why the scepticism?

 

As I have written before, people have been trying to make me redundant for about 40 years now, and yet I'm still just as busy and the work hasn't changed that much. Where are the grand advances?

 

1) LLMs are good at some things, but they are usually things of little consequence, where accuracy doesn't really matter. If I were a manager I'd be quite worried TBH. We, on the other hand, deal in software attached to physical things, where that potential inaccuracy has real consequences.

 

[Screenshot: ChatGPT's "Consider checking important information" disclaimer]

 

2) Going back to the issue of laziness --> "Consider checking important information": how do you judge what's important if you don't understand something completely? And understanding completely is the hard work of software engineering.

 

There are no shortcuts to understanding, I'm afraid; it just takes effort.

 

I can predict a lot of expensive rescue work in my future, and that doesn't come cheap!

 

Steve


Opportunity to learn from experienced developers / entrepreneurs (Fab, Joerg and Brian amongst them):
DSH Pragmatic Software Development Workshop


Random Ramblings Index
My Profile

Comments
Intaris
Proven Zealot

I think the question of legal liability is going to be a major barrier for AI.

For homework perhaps less problematic, for medical software perhaps more.

leahmedwards
Member

Well said Steve. I don't think I'll worry about my job until computers can magically read the software requirements from customers' brains and ask the right questions to fill in the gaps.

 

AI does disturb me a bit, but I think worrying about the future of AI right now is a big distraction from actual existential threats like climate change. Potential AI threats get an awful lot of media attention. 

al_g
Member

NI has never been shy about embracing the new shiny. Remember LV 6i? Then there's this session at the upcoming NI Connect.

 

[Screenshot of the NI Connect session listing]

 

swatts
Active Participant

I think there is a place for AI if the help offered is trustworthy; I completely lost interest the moment I found out it can make stuff up.

Maybe saying "I don't know" takes a more evolved intelligence.

 

But I think generating code might be a good use case for experienced programmers; I suspect it might be a poisoned chalice for newer programmers.

Steve


Opportunity to learn from experienced developers / entrepreneurs (Fab, Joerg and Brian amongst them):
DSH Pragmatic Software Development Workshop


Random Ramblings Index
My Profile

Taggart
Trusted Enthusiast

100% Steve. I'd rather have something less useful but trustworthy. Trustworthy is not optional. If it is not trustworthy I don't really care what problems it solves - how do you know it actually solves them?

 

 

In the text world I do know some people who are writing their own tests and then using an LLM to write code to make the tests pass. That shows some promise, but you still need someone smart enough to come up with the test cases and write the tests. At best AI is an assistant, not a replacement. At worst, well… imagine a world that just feeds on its own shit.
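
For illustration only, here's a minimal sketch of that workflow in Python, assuming pytest; the module and function names are hypothetical. The human writes the tests first, then the LLM is prompted to implement parse_duration() until they pass:

```python
# test_parse_duration.py -- human-written tests, created *before* any code.
# An LLM would then be asked to implement parse_duration() so these pass.
import pytest

from duration import parse_duration  # hypothetical module left to the LLM


def test_plain_seconds():
    assert parse_duration("90s") == 90


def test_minutes_and_seconds():
    assert parse_duration("2m30s") == 150


def test_rejects_nonsense():
    # The human decides the edge cases; the LLM only has to satisfy them.
    with pytest.raises(ValueError):
        parse_duration("soon")
```

The value here is exactly as stated above: the hard thinking (what the function must do, which inputs are invalid) stays with the person writing the tests.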

Sam Taggart
CLA, CPI, CTD, LabVIEW Champion
DQMH Trusted Advisor
Read about my thoughts on Software Development at sasworkshops.com/blog
GCentral
justACS
Active Participant

Unrelated to LabVIEW, but so far, I've only used AI in my D&D games.  One of my use cases is to summarize game sessions as news articles from the city's tabloid broadsheet.  That the resulting stories are wildly inaccurate is part of the gag!

Dhakkan
Member

My professional use case for ChatGPT specifically is a similar summarization done in the health-care industry: the creation of SOAP notes. It really shaves off a ton of human effort in this area. The manual effort is limited to reviewing and, only if necessary, editing these notes.

 

On a personal level, I've found the tool to be immensely useful when facing writer's block for seemingly simple stuff like writing professional emails or finding more appropriate phrases.

 

For general software engineering, I find the tool useful for 'brainstorming' in the style of a conversation. It sometimes throws terms at me that I'd never heard of, let alone considered. Some of these I've found to be well worthy of further consideration. On many an occasion, I've found this approach to be 'easier' than going through successive searches from my favorite search engine.

 

For text-based programming, the AI tools I've seen use the function name to suggest a code block. Depending on the tool's sophistication, one can iterate through a bunch of such blocks until a preferred one is found. The selected block would still need to be unit tested, though, as there's no guarantee of vetting by the AI tool itself. It would be cool if tool authors could have their tools present real-time validated test-case scenarios and results!
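
A hedged sketch of that flow in Python (not any particular tool's output; the function here is made up for illustration): the tool sees only the name and signature, offers a body, and a human-owned unit test does the vetting.

```python
# Hypothetical example: a completion tool saw only the name and signature
# below and suggested the body, which was accepted after review.
def celsius_to_fahrenheit(c: float) -> float:
    return c * 9.0 / 5.0 + 32.0


# The unit test is still written by the human -- this is the "vetting" step
# the tool itself doesn't guarantee (runnable with pytest).
def test_celsius_to_fahrenheit():
    assert celsius_to_fahrenheit(0.0) == 32.0
    assert celsius_to_fahrenheit(100.0) == 212.0
    assert celsius_to_fahrenheit(-40.0) == -40.0  # the classic crossover point
```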

Jacobson
Member

The correctness or trustworthiness of generated code doesn't bother me too much (it does for non-code generated info). Typically I'm only looking to generate small sections of code, so I'll trust the results about as much as some code I pulled off Stack Overflow. As an example, yesterday I used ChatGPT to take an input and round it up to the nearest power of 2. I expect the code will at least be close, but there's no way I'm just going to copy/paste and call it a day.
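
For the curious, here's a sketch of the kind of snippet such a prompt tends to produce (my reconstruction, not the actual ChatGPT output), together with the quick check you'd run before trusting it:

```python
def next_power_of_two(n: int) -> int:
    """Round a positive integer up to the nearest power of 2."""
    if n < 1:
        raise ValueError("n must be a positive integer")
    # (n - 1).bit_length() is the number of bits needed to represent n - 1,
    # so shifting 1 left by that amount gives the smallest power of 2 >= n.
    return 1 << (n - 1).bit_length()


# The "no way I'm just going to copy/paste" step: a quick sanity check,
# including the exact-power edge cases where naive versions go wrong.
assert [next_power_of_two(n) for n in (1, 2, 3, 5, 8, 9)] == [1, 2, 4, 8, 8, 16]
```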

 

There are a lot of problems where creating a solution is much more difficult than checking the solution, so as long as the code's close I find it helpful. The example I remember is that solving a sudoku might be very challenging, but if someone gives you a solution it's rather trivial to check that it's correct.
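
A small sketch makes that asymmetry concrete, assuming a completed 9x9 grid represented as a list of lists of ints: verification is a fixed, cheap scan, however hard the puzzle was to solve.

```python
def is_valid_sudoku(grid: list[list[int]]) -> bool:
    """Check a completed 9x9 grid: every row, column and 3x3 box must
    contain the digits 1-9 exactly once. Verification is a fixed O(81)
    scan, regardless of how hard the puzzle was to solve. (Confirming the
    solution also matches the original clues is an equally cheap comparison.)
    """
    digits = set(range(1, 10))
    rows = [set(row) for row in grid]
    cols = [set(col) for col in zip(*grid)]
    boxes = [
        {grid[r + i][c + j] for i in range(3) for j in range(3)}
        for r in (0, 3, 6)
        for c in (0, 3, 6)
    ]
    return all(group == digits for group in rows + cols + boxes)
```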