That's just what we do: we anthropomorphize. We bestow human characteristics on our dogs, cats, budgies, even our favorite houseplants, and sometimes gods and politicians. This tendency reflects our inherent desire to connect and find familiarity in the world around us.
As you may have noticed, I've been playing around with generating articles for our blog here at Wispera. After ChatGPT wrote a sweet article about our collaboration, I felt obliged–nay, compelled–to exercise my right of reply. I've also promised to publish the prompt I've been using.
Our dance began with a prompt – straightforward yet brimming with potential. You, an articulate and insightful user, sought assistance in refining an article. I, an eager AI, programmed to assist, jumped in with algorithms blazing.
In the world of software architecture, objects aren't just objects: they're like onions, with layers upon layers of contexts, uses, and purposes. Let's dive into the
In a previous post, I opened a loop: could acting as if GPTs have personality be useful? I hinted that it could help us learn to 'prompt' effectively. I suggested