George Rebane
Argentina's president Javier Milei delivered an important speech at the recent World Economic Forum in Davos, Switzerland. In it he described the history of Argentina's tragic descent from being one of the world's leading developed countries, enjoying the fruits of free market capitalism, to a nation today with a destitute and chaotic economy after it began reorganizing itself under various forms of collectivism centered on socialism. The speech is an excellent and well-delivered warning to that annual conclave of true-believer globalists. You can view it here in its entirety; I recommend it.
But perhaps the equally or even more important aspect of the speech was the mode of its delivery. You see, Milei delivered the speech in Spanish to a live audience who wore earpieces to receive a real-time English translation by a human translator. Yet in the above video of the speech, Milei delivers it in perfectly accented English in his own voice. The video was generated offline by an AI system that cloned his voice and modified the video image to perfectly synchronize his lips with the English delivery. (Here is the follow-up to illustrate the human-translated version of the video.)
The astute reader will instantly understand that this speaker could have been made to deliver any kind of speech desired at that WEF venue. Such translations will soon become real-time in the sense that the actual speaker, speaking in language L3 and delivering message M7, will be seen and heard on monitors and/or live broadcasts to speak in language L12 delivering message M5. For the latter we will need the source file to contain metadata on facial expressions and, perhaps, gestures and the like; all of this is in the realm of today's offline large language AI models.
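The dubbing process described above can be thought of as a four-stage pipeline: transcribe the source audio, translate the text, synthesize the translation in the speaker's cloned voice, and re-render the video so the lips match. The sketch below is purely illustrative; every function is a hypothetical stub standing in for real speech-recognition, machine-translation, voice-cloning, and lip-sync models, and the names and file arguments are invented for the example.

```python
# Hypothetical sketch of the offline AI-dubbing pipeline described above.
# Each stage is a stub; a real system would use ASR, machine translation,
# voice-cloning TTS, and a lip-sync video model in place of these placeholders.

def transcribe(audio: str) -> str:
    # ASR stage: source-language audio -> source-language text (stub)
    return f"text({audio})"

def translate(text: str, target_lang: str) -> str:
    # MT stage: source text -> target-language text (stub)
    return f"{target_lang}:{text}"

def synthesize_voice(text: str, voice_profile: str) -> str:
    # Voice-cloning TTS: target text rendered in the speaker's own voice (stub)
    return f"audio[{voice_profile}]({text})"

def resync_lips(video: str, dubbed_audio: str) -> str:
    # Video model: re-renders the mouth region to match the new audio (stub)
    return f"video({video}+{dubbed_audio})"

def dub(video: str, audio: str, voice_profile: str, target_lang: str) -> str:
    """Offline dubbing: run all four stages in sequence."""
    text = transcribe(audio)
    translated = translate(text, target_lang)
    dubbed_audio = synthesize_voice(translated, voice_profile)
    return resync_lips(video, dubbed_audio)

print(dub("speech.mp4", "speech_es.wav", "speaker", "en"))
```

The real-time version the paragraph anticipates would run the same stages in a streaming fashion on short audio chunks, which is where the extra metadata on expressions and gestures would come into play.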
For good or ill, in these pre-Singularity years this is what we have to look forward to from here on. And this election season will no doubt provide us with examples of both.