I have worked with loads of AI tools and even use some in my day-to-day work. Although using any of them is not a requirement, I find them pretty useful. Let's talk about the fun tools first. The tool I have probably spent the most money on for generating fun, interesting, and overall engaging images is DALL·E: from adding fighter aircraft to create a "WW2"-inspired background, to building a whole new background and picture from simple prompts, resulting in a masterpiece that is currently sitting on my desk. I also used Descript AI to help my girlfriend, who is a journalist, transcribe an interview. It is an amazing tool that works surprisingly well with Croatian (just not so well when you mix Croatian with English).
ChatGPT has made me disappointed in Google search results. Quite often, it gives me a good-enough answer faster than Google. However, the catch is in "good enough": you have to check and verify that the result is correct, as there are ways to make ChatGPT tell you that 2 + 2 is 5.
The most useful tool for me has been GitHub Copilot, as a lot of programming is quite repetitive or rather simple. This tool makes that work fun again, because you can offload at least 80% of that simple, repetitive work to it. Do you need a function that returns the organization list filtered by type? It can write the exact function in the code style you use. Quite helpful for getting the boring work done and out of the way, leaving you with the more complex stuff to write.
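To illustrate the kind of boilerplate meant here, this is a minimal sketch of the sort of function Copilot typically completes from a comment or signature. The `Organization` type and function name are hypothetical, not from any real codebase:

```typescript
// Hypothetical domain type for the example.
interface Organization {
  id: number;
  name: string;
  type: string;
}

// Returns only the organizations matching the given type.
function filterOrganizationsByType(orgs: Organization[], type: string): Organization[] {
  return orgs.filter((org) => org.type === type);
}

// Example usage with made-up data:
const orgs: Organization[] = [
  { id: 1, name: "Acme", type: "nonprofit" },
  { id: 2, name: "Globex", type: "commercial" },
];
console.log(filterOrganizationsByType(orgs, "nonprofit"));
```

Trivial to write by hand, but Copilot produces dozens of such helpers per day from a one-line comment, which is where the time savings add up.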
They make most programming tasks easier and faster. But when they break down or do not understand exactly what you want, they do quite the opposite!
Mostly related to input from my side: if you give them a garbage prompt, you will get garbage out. It takes time to learn how to ask efficiently, and how to receive and digest the output of any current AI tool.
Not many striking differences or stories to talk about, but we have used ChatGPT from time to time to explain something that is rather hard to understand at first. Recently we had an API route that was growing day by day with new restrictions per role, and we were using a lot of "if statements", which made it quite hard to tell which user gets which filters. Quickly copying the block into ChatGPT and asking it to rewrite it as a more maintainable "switch statement" (which it should have been in the first place) gave us a good visual sense of how it could work and look. Even though that preview was not 100% correct, it gave us confidence that rewriting it would pay off and make this part of the code more maintainable.
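A simplified sketch of that kind of refactor, with invented role and filter names standing in for the real ones. The point is structural: the switch puts every role next to its filters, where the if-chain scattered them:

```typescript
// Hypothetical roles and filters — illustrative only, not the original code.
type Role = "admin" | "editor" | "viewer";

// Before: a growing chain of if statements; each new role or restriction
// adds another branch, and the role-to-filters mapping gets hard to scan.
function filtersForRoleBefore(role: Role): string[] {
  if (role === "admin") {
    return ["status", "owner", "type", "archived"];
  }
  if (role === "editor") {
    return ["status", "type"];
  }
  if (role === "viewer") {
    return ["status"];
  }
  return [];
}

// After: the equivalent switch statement — same behavior, but each role
// and its allowed filters sit side by side in one block.
function filtersForRole(role: Role): string[] {
  switch (role) {
    case "admin":
      return ["status", "owner", "type", "archived"];
    case "editor":
      return ["status", "type"];
    case "viewer":
      return ["status"];
    default:
      return [];
  }
}
```

This is exactly the kind of mechanical, behavior-preserving rewrite where a quick ChatGPT draft is useful as a preview, even if the draft itself needs correcting before it ships.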
I see huge potential, especially in removing the dull, repetitive, and simpler parts of the work we have to do daily. I also see it as a good research tool, as it can give you ideas insanely quickly. However, inaccuracies and misunderstandings are inevitable, which keeps us humans necessary in the interaction between the AI, the tools, and the end product. I would compare it to Google.
We've had it for years, if not decades, and you could say it should have disrupted the market by allowing anyone to be knowledgeable. However, only those who know how to use a tool like Google profit from it; most don't use it for research and learning. The same can be said for AI tools: in both cases, you need to be able to vet the results and output and/or change the prompt to get useful answers. Google has been an amazing tool in my career, work, and life overall. I see the same outlines in AI tools, and I can't wait for them to get better.
Yes and no. I see the situation with new tools as I would with any other. They may assist in completing tasks, but AI software cannot fully replicate the exact results we desire from pouring our ideas or thoughts into it. Without the capability for AI to read the precise idea we have in our minds regarding software, a feature, a fix, a bug, or anything in between, I do not believe full automation is possible. Those who wield and use these tools will still be necessary. Startups like Neuralink may be able to provide the final puzzle piece, which is a direct connection between the brain and software.
However, I am very skeptical about this. An AI's learning from text, research, internet access, social media, and other training covers only one aspect of the human brain. To achieve the same output as humans, only faster, we must unlock and incorporate emotions, creativity, and logic. Once we establish a "brain → machine" link, we could accelerate development and engineering substantially. And once AI learns to emulate our brains through this link, automation will be possible. Yes, most of the ideas above are science fiction for now, but the possibilities are endless. It depends on how much they will cost and how willing people are to develop them.
From my perspective, the main issue is privacy. It is difficult to obtain proper training data legitimately, and easy to access and use data that should not be used. The battle for privacy on the internet is lost: too much money is on the anti-privacy side, and government entities like the EU or Germany are too slow to adapt and protect people with legislation.
However, the development of AI and its tools is bound to raise moral, ethical, and privacy concerns. One aspect of this issue is the availability of funding and processing power, while the other is the quality and suitability of training data.
[Figure: The triangle of AI progress. Get all three, keep them balanced.]