Open-Source Motion Capture: A Frugal Education Project

As I mentioned in my previous post, my experiments in AI collaboration for project development have been very successful so far. That initial experiment has inspired a larger follow-on project that takes the collaboration approach much further, with a view to building a full-featured open-source motion capture system. This is no small undertaking, and I will no doubt have my work cut out, as there are many facets to a project of this size. However, it has the potential to give students a rich source of interdisciplinary collaboration opportunities on an exciting open-source project.

So what’s the plan exactly?

Well, the plan is to build motion capture suit hardware and wirelessly pair it with a desktop application that can visualise and record data from the suit. This will be achieved using a combination of available materials and resources, a small budget for custom sensors and microcontrollers, and the capabilities of freely available LLM chatbots, such as ChatGPT and others, to aid the development process. More on that later.

This project will leverage available resources in a variety of ways, taking advantage of:

  • In-house equipment and expertise from across my institution;
  • Off-the-shelf components and clothing, used to assemble the hardware;
  • Recycled and repurposed materials including electronics, plastics, and cabling;
  • Access to freely available LLMs for collaboration in the design and development of the custom software platform;
  • An open-source repository to archive and share the platform's underlying code.

Financial Bootstrapping

In order to get a proof of concept off the ground, I have a small surplus left over from the money I managed to secure for the initial motion capture experiment before Christmas, combined with around £20 from a previous project. This will let me purchase some of the essential hardware needed for testing, so I can construct a minimum viable prototype and confirm whether the hardware configuration I'm planning to use will be capable of scaling up to meet the performance demands of the full suit. If and when the proof of concept is successful, I'll be ready to scale up the hardware and construct the full suit. In the meantime I can begin work on the software platform, initially on macOS owing to my experience developing in Swift with Apple's tools, and mainly due to the fact that I'm a Mac user. I'll be collaborating with a developer on the Windows port when the time comes.

To help me develop the software I'll be collaborating with a large language model. My pilot project was very successful, and with even more features and capabilities rolling out since my initial attempts, I'm very optimistic that this will be successful too. I'll write about my experiences using ChatGPT as a virtual collaborator in a later post, but in short, it's an incredible coding tool that you can benefit from very effectively if you use the free access in a smart way – i.e., making the most of the premium model that is provided in small helpings in a kind of freemium arrangement. If you game it a bit you can get a lot done with the more advanced models while making do with the more basic models in between. However, there's a new kid on the block by the name of DeepSeek that kind of changes everything…

Now that I have a small budget I can get the initial equipment on order and merge this with some existing hardware. In the meantime, my AI helpers and I can set about building a basic testing app for the hardware to speak to.
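To give a flavour of what that basic testing app might look like at its core, here's a minimal sketch of a desktop-side listener that receives sensor samples from the suit over Wi-Fi. Everything here is an assumption for illustration: the packet layout (a node ID, a timestamp, and an orientation quaternion), the use of UDP, and the port number are all placeholders, not a finalised protocol. It's in Python for brevity; the real app will be written in Swift.

```python
import socket
import struct

# Hypothetical packet layout for one IMU node on the suit.
# These field names and sizes are assumptions, not a finalised protocol:
#   uint8   node_id       - which sensor on the suit sent the sample
#   uint32  timestamp_ms  - milliseconds since the node booted
#   4 x float32           - orientation quaternion (w, x, y, z)
PACKET_FORMAT = "<BI4f"  # little-endian, no padding: 21 bytes total
PACKET_SIZE = struct.calcsize(PACKET_FORMAT)

def parse_packet(data: bytes) -> dict:
    """Unpack one raw sensor sample into a small dict."""
    node_id, timestamp_ms, w, x, y, z = struct.unpack(PACKET_FORMAT, data)
    return {"node": node_id, "t_ms": timestamp_ms, "quat": (w, x, y, z)}

def listen(port: int = 9000) -> None:
    """Print samples as they arrive from the suit over UDP."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("0.0.0.0", port))
    while True:
        data, _addr = sock.recvfrom(64)
        if len(data) == PACKET_SIZE:  # ignore malformed datagrams
            print(parse_packet(data))
```

Even a stub this small is enough to check that packets are arriving intact and at the expected rate before any visualisation work begins, which is exactly the kind of early sanity check the proof-of-concept stage is for.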

I'll be posting regular updates as I go, so hopefully you'll see another post in the not-too-distant future…