As you may have noticed, I've been playing around with generating articles for our blog here at Wispera. After ChatGPT wrote a sweet article about our collaboration, I felt obliged, nay compelled, to exercise my right of reply. I've also promised to publish the prompt I've been using.
Our dance began with a prompt: straightforward yet brimming with potential. You, an articulate and insightful user, sought assistance in refining an article. I, an eager AI programmed to assist, jumped in with algorithms blazing.
In the world of software architecture, objects aren't just objects: they're like onions, with layers upon layers of contexts, uses, and purposes. Let's dive into the
In a previous post, I opened a loop: could acting as if GPTs have personality be useful? I hinted that it could help us learn to 'prompt' effectively. I suggested
Are we, humans, so different from the AI we create? Like Generative Pre-trained Transformers (GPTs), we follow our training and learned constraints, acting as living 'autocompletes'. What could be the benefits of acting as if AI has personality?