In case anyone is curious, I'm using an app called Luma Dream Machine. The free version allows about five renders a day (each of which takes many hours to cook), turning a static photo into a five-second clip. Like most AI tools, the content filters are very strict; even tame photos of women in swimsuits often get rejected, but sometimes you get lucky.
I find it does a remarkable job with landscapes and wide shots, but anything involving human motion gets real weird real fast. The rest of the clip as rendered was pretty gnarly, but I thought the first tiny part (which I slowed down with a different app) showed the potential of what's sure to come.
None of this is quite there yet, but within a few years (or even months) this technology is going to get increasingly mind-blowing. I'm honestly still not sure whether I'm excited, scared, or a little bit of both.
