It was almost Christmas when somebody forwarded an email to me: an open call from a Hungarian student organization for an Israeli renewable energy hackathon. Having always made good memories at hackathons, I applied by sending an essay and my CV, and eventually got invited to the second round. There, we had two hours to come up with and present an innovative idea. Although my idea was mediocre (a green, community-focused bank that only invests in renewable projects and thereby implicitly enforces regulations), I became one of the two Hungarian delegates.

So far, so good. A week-long trip to Israel lay ahead of me, but the main event’s topic was hydrogen. I did a bit of research so I’d (at least) know what the color codes meant, but I assumed I could just let others do the chemistry and biology while I used my software engineering skills to build a prototype. In hindsight, this was a bold assumption. Nobody on our team of four had a background in chemistry or biology, and our specific topic was a tad technical: CO2 capture and utilization in the blue hydrogen production process. Uhm, okay.

All this happened in February and March, when ChatGPT had already become the cool kid on the block. Asking it for some initial help and building on top of that was an obvious choice. Soon (particularly during the in-person event), GPT-3.5 became our fifth team member. Whenever we needed ideas, didn’t understand something, or had questions, we had an expert to talk to: a new teammate who could fill our knowledge gap in chemistry.

It wasn’t as easy as telling it the high-level task and leaning back while it did the work for us. We still had to dive deep into the topic and understand the connections. But it saved us hours of research on specific questions, letting us familiarize ourselves with the subject much faster, which was invaluable. To make a software engineering analogy: we couldn’t tell it that we wanted a fancy app with features X, Y, and Z and watch the GitHub repo fill up with source code. But we could ask it to help us solve specific, much lower-level problems.

We also ran into some limitations. For me, the most annoying were hallucinations. I was aware of the phenomenon, but I didn’t expect the model to not only make up research paper titles and author names but also claim they appeared in a particular issue of an existing journal. Coming to this realization nine hours before the presentation, at 3 in the morning, wasn’t particularly awesome. I did find that the facts GPT-3.5 spat out were often easy to verify with Bing AI. I also wish I had discovered ChatPDF while I was there; it could’ve saved us lots of time skimming through papers in search of a particular piece of information.

Our team came in third in the end, which was completely unexpected after getting to know the others (biology and chemistry olympiad participants, ahem). I’m confident that a significant portion of this was thanks to our fifth member, GPT-3.5. I’ve read a few times that in the future, “centaurs”, human-machine duos, will do the work. To me, it’s fascinating that I got to experience this so soon. I’m looking forward to seeing how this even faster “knowledge retrieval” will increase the velocity of innovation. One could argue that these models are just “echoing” their training data and are unable to create something new. But what if this is enough? What if humans come up with the new ideas, while LLMs give them an efficient way to access existing knowledge?