03-26-2023 10:53 AM - edited 03-26-2023 10:53 AM
@Jay14159265 wrote:
I agree that in 5 years ChatGPT, Bard, Bing will be prehistoric technology, but not because it was hype. In 2 years we will all be working for our benevolent AI overlords : )
Hmm, you are optimistic about AI's capabilities. 😀 Or should that rather read pessimistic?
Anyhow, Colossus would be a good way to end the bickering and warmongering of power-hungry people all over the planet. It may be humanity's only chance to survive the next 500 years in any way.
03-26-2023 11:43 AM
@rolfk wrote:
@Jay14159265 wrote:
I agree that in 5 years ChatGPT, Bard, Bing will be prehistoric technology, but not because it was hype. In 2 years we will all be working for our benevolent AI overlords : )
Hmm, you are optimistic about AI's capabilities. 😀 Or should that rather read pessimistic?
Anyhow, Colossus would be a good way to end the bickering and warmongering of power-hungry people all over the planet. It may be humanity's only chance to survive the next 500 years in any way.
Well, I am optimistic that even if it's not 'real' AI, someone is still going to put it in charge.
Had a conversation with a lawyer the other day, and he claimed the reason AI will not take over is that, as of yet, it's impossible to hold an AI accountable for its actions (as in, sue the crap out of it; this was in the context of self-driving cars). So that counts the United States out from leading the pack of AI-controlled everything.
03-27-2023 01:33 AM
He didn't say that the AI will be put in charge 😄
He implied it will put itself in charge.
03-27-2023 02:44 AM
@Jay14159265 wrote:
@rolfk wrote:
@Jay14159265 wrote:
I agree that in 5 years ChatGPT, Bard, Bing will be prehistoric technology, but not because it was hype. In 2 years we will all be working for our benevolent AI overlords : )
Hmm, you are optimistic about AI's capabilities. 😀 Or should that rather read pessimistic?
Anyhow, Colossus would be a good way to end the bickering and warmongering of power-hungry people all over the planet. It may be humanity's only chance to survive the next 500 years in any way.
Well, I am optimistic that even if it's not 'real' AI, someone is still going to put it in charge.
Had a conversation with a lawyer the other day, and he claimed the reason AI will not take over is that, as of yet, it's impossible to hold an AI accountable for its actions (as in, sue the crap out of it; this was in the context of self-driving cars). So that counts the United States out from leading the pack of AI-controlled everything.
Freedom from accountability might be the exact reason an AI would be put in charge. If anything doesn't work out, the politicians (or whoever) can just throw up their hands and say, "The AI did it."
03-27-2023 03:31 AM
GPT, and the assistants that text-based programming languages are getting, are another nail in the coffin for LabVIEW. Just take a look at what tools are available:
https://github.com/features/preview/copilot-x
https://www.youtube.com/watch?v=4RfD5JiXt3A&ab_channel=GitHub
Soon most boilerplate code will be generated. Unfortunately, not for LabVIEW.
03-27-2023 03:47 AM
@pawhan11 wrote:
Soon most boilerplate code will be generated. Unfortunately, not for LabVIEW.
That could actually be a very good thing. Much of that boilerplate code will be put in place by people who don't understand crap about programming and who will believe that because it compiles, it is correct.
03-27-2023 04:09 AM
It does not matter how 'correct' the code is. I may as well accept assembler or any other non-human-readable low-level output, as long as
1) it solves the problem
2) it does so efficiently in terms of CPU/RAM/HDD
It reminds me of a LabVIEW Architect I was working under who forced us to spend 2-3 days designing all UI elements just so they would be AF-based and modular, even if they were one-time, throw-away simple debug menus for the test rig. Especially nice was the fact that you had to work several days straight, 24/7, because the project was long overdue. Presumably because of the same "it should be ideally correct" reasons.
03-27-2023 04:20 AM - edited 03-27-2023 04:22 AM
I tend to disagree on generation. You will need fewer people to do the same work: one skilled developer who knows what he wants, sends the right queries to those models, and checks the code afterwards.
In my experience after a few days of using GPT-4 for .NET and unit test generation, it gets it right and good 90% of the time. I have only 2 years of experience in text-based languages after switching from 10 years in LabVIEW; when I compare the time to write this boring code myself vs. generation plus output verification, it is so much faster.
I expect it might go in the direction where we focus more on the interesting parts, like architecture and system design, while code generation is assisted by those models. Right now it is way better to query a model than Stack Overflow and Google search for most of my cases.
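As a hypothetical illustration of that workflow (these names and the code are mine, not from the thread, and shown in Python rather than .NET): a trivial "boring" helper plus the pytest-style unit tests an assistant might draft, which the developer then reads and verifies rather than writes from scratch.

```python
def parse_measurement(line: str) -> tuple[str, float]:
    """Parse a 'channel=value' measurement line into (channel, value)."""
    channel, sep, raw = line.partition("=")
    if not sep or not channel.strip() or not raw.strip():
        raise ValueError(f"malformed measurement line: {line!r}")
    return channel.strip(), float(raw)

# Tests of the kind an assistant might generate; the human's job is to verify them.
def test_basic_line():
    assert parse_measurement("temp=21.5") == ("temp", 21.5)

def test_whitespace_is_stripped():
    assert parse_measurement(" volts = 3.3 ") == ("volts", 3.3)

def test_malformed_line_raises():
    try:
        parse_measurement("no-equals-sign")
    except ValueError:
        pass
    else:
        raise AssertionError("expected ValueError")
```

The point is the division of labour: generating this kind of boilerplate is cheap for a model, and reviewing it is far faster than typing it.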
03-27-2023 04:43 PM - edited 03-27-2023 04:53 PM
I work in an academic/research environment, and LabVIEW used to be very cheap for us (software at ~75% to 90% off, and courses at 75% off). Currently, the best rates quoted by NI themselves are 30% off, and their distributors will not go more than 10% off. NI single-handedly killed the use of LabVIEW in academia. As others have said several times: no LabVIEW in academia -> no new users -> further decline in the adoption and use of LabVIEW. Maybe NI will keep in-house developers for the big turnkey installations, but I won't be surprised if the LabVIEW programmers in the wild are gone in 20-30 years.
03-27-2023 04:59 PM
If the COVID-19 vaccine manufacturers can weasel their way out of any liability for their vaccines and still be able to sell them, what is to prevent the same for AI?