As you already know, research into human pose estimation had rapidly accelerated by 2006. The difficulties in calculating the pose of a human (and, indeed, in identifying humans at all) were quickly being understood and, in many cases, resolved.
The computing power required to perform these operations was dropping as algorithms became more efficient, and at the same time hardware continued to get faster, making it ever more likely that a computer could run quickly enough to give real-time feedback on the location and pose of a 3D human captured in a 2D image.
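To make that concrete, here is a minimal sketch of what single-image pose estimation looks like in code today. It assumes Google's MediaPipe library purely as one example; the library choice and the file name "person.jpg" are illustrative, not something from this series:

```python
# Minimal sketch: estimate a person's pose from a single 2D image.
# Assumes the MediaPipe and OpenCV packages are installed
# (pip install mediapipe opencv-python); "person.jpg" is a placeholder.
import cv2
import mediapipe as mp

mp_pose = mp.solutions.pose

image = cv2.imread("person.jpg")
with mp_pose.Pose(static_image_mode=True) as pose:
    # MediaPipe expects RGB input; OpenCV loads images as BGR.
    results = pose.process(cv2.cvtColor(image, cv2.COLOR_BGR2RGB))

if results.pose_landmarks:
    # Each landmark is a body joint with normalised x/y coordinates
    # and a rough depth estimate (z): a 3D pose from a 2D image.
    for i, lm in enumerate(results.pose_landmarks.landmark):
        print(f"joint {i}: x={lm.x:.2f}, y={lm.y:.2f}, z={lm.z:.2f}")
```

A few lines of code, running comfortably on a laptop, now do what was at the research frontier back then.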
By this stage it had become quite common for people to carry a portable device, usually a phone or an MP3 player. The processing power of such devices was limited, and any cameras they included were rather low resolution. In fact, it was often hard even for a human to estimate the pose of anything in a photograph taken by one of these devices!
But that was about to change.

Enter the iPhone

In a world where people carried separate devices for individual tasks (such as playing MP3s or making calls), phones had gradually begun to absorb new capabilities. When the iPhone arrived on the market, it combined all of these into a single unit.
For its time, it had a relatively fast processor and a good-quality camera, and when it launched to the general public in 2007 it was immediately popular. With numerous Android devices following shortly after, the smartphone era had truly begun, and it gave pose estimation a new outlet: entertainment.
As time progressed, apps appeared that could take a photograph of your face and alter the way it looked, and as the software improved this grew into real-time video enhancement. Nothing drives the development of technology like the need to entertain humans!
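The first step behind those face filters is simply finding a face in the frame. Here is a rough sketch of that step alone, assuming OpenCV and the Haar cascade model it ships with; real filter apps use far more sophisticated landmark models, and the webcam index and overlay logic here are placeholders:

```python
# Minimal sketch: real-time face detection, the first step behind
# photo filters. Assumes OpenCV (pip install opencv-python); the
# actual filter/overlay step is omitted.
import cv2

# Load the frontal-face detector that ships with OpenCV.
face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)

cap = cv2.VideoCapture(0)  # default webcam
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    for (x, y, w, h) in face_cascade.detectMultiScale(gray, 1.1, 5):
        # A real filter app would warp or overlay graphics here;
        # this sketch just draws a box around each detected face.
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imshow("faces", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()
```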
While Snapchat, Facebook Messenger, FaceApp, and other such apps are now commonplace, the technology behind them is also being put to use in other areas.

The Modern Fashion

Finding clothes that fit can be difficult unless you try them on, which is why, even by 2018, bricks and mortar retailers still dominated clothing sales. No matter how good an item looked on the model in the picture, you knew that if you ordered it over the internet there was a strong chance it wouldn’t fit well when it arrived.
Amazon took the quick-fix route and developed Amazon Prime Wardrobe: users could order an item in multiple sizes, keep the one they wanted, and return the rest. But there might be a better way.
With an accurate model of the garment paired with an accurate model of the wearer, a virtual changing room suddenly became possible. But, as with any technology in its infancy, there were, and still are, hurdles to overcome.
Naked Labs developed a rotating scale and body-scanning mirror that could determine the exact shape and composition of your body, allowing this to become a reality. The technology itself is fantastic, but hardly suitable for home use. Do you have the space to install a rotating scale and mirror?
Nike are implementing a feature in their app to measure your feet, which is significantly easier for the user than installing a full body-scanning mirror. But even if “smart” mirrors become commonplace in homes, will we still wonder whether the clothes we buy online will fit well? Or whether they will suit us?
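Neither Nike nor Naked Labs publish their methods here, but as a rough illustration of the underlying idea: once you have pose landmarks and one known reference dimension (the wearer's height, say), other body measurements fall out by simple proportion. Everything below is hypothetical: the landmark names, the example coordinates, and the accuracy are all illustrative only.

```python
# Rough sketch: estimate shoulder width from pose landmarks, using
# the person's known height as a scale reference. The landmark dict
# and its coordinates are made up for illustration.
import math

def distance(a, b):
    """Euclidean distance between two (x, y) landmark tuples."""
    return math.hypot(a[0] - b[0], a[1] - b[1])

def shoulder_width_cm(landmarks, person_height_cm):
    # Apparent body height in image coordinates (a crude proxy:
    # nose to ankle; a real system would calibrate more carefully).
    pixel_height = distance(landmarks["nose"], landmarks["left_ankle"])
    pixel_shoulders = distance(
        landmarks["left_shoulder"], landmarks["right_shoulder"]
    )
    # Scale by proportion: known real height over apparent height.
    return pixel_shoulders * (person_height_cm / pixel_height)

# Example with made-up normalised coordinates:
example = {
    "nose": (0.50, 0.10),
    "left_ankle": (0.48, 0.90),
    "left_shoulder": (0.40, 0.25),
    "right_shoulder": (0.60, 0.25),
}
print(f"~{shoulder_width_cm(example, 175):.1f} cm across the shoulders")
```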
Is it all science fiction, or is it about to become a (virtual) reality?

The Future is Coming

With machine learning and AI being the current buzzwords, the future certainly looks exciting. We are on the cusp of something quite amazing, and the seeds are already planted.
How will it work? You’ll find out in the final part of this three-part series!
