
Summary: The article discusses the experience of working at OpenAI and living in multiple timeframes simultaneously. It explores the challenges of dealing with cutting-edge technology and solving problems that the rest of the world won’t encounter for several months. The article also delves into how OpenAI manages external communication and keeps future projects under wraps. The author wonders about the communication and feedback systems between teams working on future projects and those working on published projects.

In an interview for ABC published on March 18th, 2023, Sam Altman said that they had the GPT-4 model ready 7 months earlier, i.e. already in August 2022 (min 13:09). I can’t imagine that they don’t already have a GPT-5 or GPT-6 model ready (Sam mentions a hypothetical GPT-7 at min 8:17), or that they aren’t preparing something new or entirely different that exceeds my imagination. It must feel like time travel to solve problems and questions internally with part of your team that the world (and tech Twitter) will only encounter as something completely new in a few months. And then, a few minutes later, to discuss with the public and the media things that were discussed and solved internally months ago, but that for the public are hot news and the main message of the day.

Living in the future while working at a company like this is nothing new: Paul Graham wrote about it in the context of startups ten years ago in his essay. But it must be something else entirely to live partly in a future that is also part of ordinary reality, like self-driving cars on the streets in testing mode, or to live in several different timelines at once that are interconnected with each other. How many people at OpenAI experience this daily? And do their fingers itch when they read Twitter discussions about things they have already solved internally in the next version, while only they know it so far? It reminds me a bit of Philip K. Dick’s “A Scanner Darkly” or crime stories about undercover detectives; it’s probably not for everyone. Can they talk to their friends about the problems they are solving at work? And how often do they have déjà vu when the same things they talked about 7 months ago are now being discussed in public?

We probably need to appreciate even more how well OpenAI manages external communication, and therefore how strong the internal company culture must be: I haven’t noticed any leaks about when and what OpenAI will publish or change. The only leak I heard about was GPT-4 a week in advance, and that didn’t even come from OpenAI, but from Microsoft Germany :-).

Does OpenAI have separate teams that “live in the future” and work on what’s coming (unpublished models, interfaces), and teams/people who support the present, i.e. people using what’s already published? And conversely, how are these teams interconnected; how is the feedback loop set up between what’s learned in the present and the work on the future that’s already happening in parallel? So many questions.

Note on the translation: I asked, “In the context of the text above, is it better to use ‘living in multiple timeframes’ or ‘living in several timelines’?” → “…However, ‘timeframes’ may be more appropriate as it implies a sense of work-related tasks and deadlines, which fits with the article’s focus on the challenges of working with cutting-edge technology and solving problems that the rest of the world won’t encounter for several months. On the other hand, ‘timelines’ may imply a broader sense of historical periods… ‘living in multiple timeframes’ may be more appropriate as it emphasizes the experience of simultaneously dealing with different time periods, which is a central theme in the article. However, if the focus is on the idea of existing in different parallel universes, then ‘living in several timelines’ may be more appropriate.”

Credit: Concept written by human, text written by human, translated to English by AI/ChatGPT, summary by AI/ChatGPT edited by human