• 2 Posts
  • 3 Comments
Joined 1 year ago
Cake day: July 9th, 2023


  • Nope, to be specific, my application is going to apply many effects onto light sources (DMX).

    Those effects are going to be sine/cosine, PWM, triangle and more. Those are what make my head hurt most, especially since I cannot predict how long the calculation takes for the 8192 values (512 channels times 16 universes; this can/will expand even further, e.g. 512 channels times 128 universes).
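
    To give an idea of the shape of the work (just a rough sketch with made-up names and numbers, not my actual code), this is roughly what one frame of a sine effect over all 8192 channels looks like, with the computation timed:

    ```rust
    use std::time::Instant;

    const UNIVERSES: usize = 16;
    const CHANNELS: usize = 512 * UNIVERSES; // 8192 values per frame

    // Hypothetical sine effect: time in seconds in, one byte per channel out.
    fn sine_effect(t: f32, channel: usize) -> u8 {
        let phase = t * std::f32::consts::TAU + channel as f32 * 0.05;
        ((phase.sin() * 0.5 + 0.5) * 255.0) as u8
    }

    fn main() {
        let mut frame = vec![0u8; CHANNELS];
        let t = 0.0_f32;
        let start = Instant::now();
        for (ch, out) in frame.iter_mut().enumerate() {
            *out = sine_effect(t, ch);
        }
        // At 40 Hz the budget is 25 ms; timing like this is how I'd check
        // whether the per-frame math actually fits into it.
        println!("one frame took {:?}", start.elapsed());
    }
    ```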

    The output frames need to be smooth, i.e. they should not lag (which means a high refresh rate; the maximum allowed is around 40-45 Hz).

    Currently I'm running in lockstep, i.e. a single thread decoupled from the parent, which first has to run the inputs (e.g. network input) and then has to apply the effects (i.e. the math operations) to many thousands of parameters.

    While I only need 8-bit precision per channel (a channel is a single byte), some devices may take 2 channels for fine control (i.e. 16 bit), where my accuracy has to be higher.
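
    For the 16-bit case I'd do the math at the higher precision and only split into the two DMX bytes when writing the frame. Something like this (sketch only; assuming the usual coarse-byte-first convention, which in practice depends on the fixture):

    ```rust
    // Split a 16-bit value into the coarse/fine byte pair that a
    // two-channel (16-bit) fixture expects.
    fn split_16bit(value: u16) -> (u8, u8) {
        let coarse = (value >> 8) as u8; // high byte -> first channel
        let fine = (value & 0xFF) as u8; // low byte  -> second channel
        (coarse, fine)
    }

    fn main() {
        // Keep the effect math in u16 (or f32) and only split at output time,
        // so the fine channel actually benefits from the extra precision.
        let (coarse, fine) = split_16bit(0x1234);
        assert_eq!((coarse, fine), (0x12, 0x34));
    }
    ```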

    I think I can take the inputs out of that loop: decouple them into another thread that just updates some shared buffer, which can always be read regardless of how long the input method actually takes.
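
    Roughly what I have in mind (again just a sketch using std primitives, names made up): an input thread overwrites a shared snapshot at its own pace, while the render loop only ever reads the latest state and never blocks on I/O.

    ```rust
    use std::sync::{Arc, RwLock};
    use std::thread;
    use std::time::Duration;

    const CHANNELS: usize = 512 * 16;

    fn main() {
        // Shared input snapshot; the render loop only ever reads it.
        let inputs = Arc::new(RwLock::new(vec![0u8; CHANNELS]));

        // Hypothetical input thread: polls the network (or whatever the
        // source is) at its own pace and overwrites the shared buffer.
        let writer = Arc::clone(&inputs);
        thread::spawn(move || loop {
            let new_data = vec![0u8; CHANNELS]; // stand-in for a received packet
            *writer.write().unwrap() = new_data;
            thread::sleep(Duration::from_millis(50));
        });

        // Render loop: grab whatever the latest inputs are, never wait for them.
        for _frame in 0..3 {
            let snapshot: Vec<u8> = inputs.read().unwrap().clone();
            // ... apply effects on top of `snapshot` and send the DMX frame ...
            let _ = snapshot;
            thread::sleep(Duration::from_millis(25)); // ~40 Hz
        }
    }
    ```

    A lock-free double buffer would also work here; the important part is only that the render loop never waits on the input side.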

    Btw, while technically there can be multiple effects running at once (e.g. a sine on channels 1-12, a triangle on 32-35), no single channel will ever have multiple effects. So I am always computing at most 8192 (or however many universes times 512) values.
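
    So the per-channel assignment can be as simple as one optional effect per channel. A sketch of what I mean (types and parameters made up):

    ```rust
    #[derive(Clone, Copy)]
    enum Effect {
        Sine { freq_hz: f32 },
        Triangle { freq_hz: f32 },
        // PWM and the others would go here as well.
    }

    // Triangle wave in [0, 1] for a phase in [0, 1).
    fn triangle(phase: f32) -> f32 {
        1.0 - (2.0 * phase - 1.0).abs()
    }

    // Each channel has at most one effect, so a flat table of
    // Option<Effect> indexed by channel is enough.
    fn render(assignments: &[Option<Effect>], t: f32, frame: &mut [u8]) {
        for (ch, slot) in assignments.iter().enumerate() {
            if let Some(effect) = *slot {
                let v = match effect {
                    Effect::Sine { freq_hz } =>
                        (t * freq_hz * std::f32::consts::TAU).sin() * 0.5 + 0.5,
                    Effect::Triangle { freq_hz } => triangle((t * freq_hz).fract()),
                };
                frame[ch] = (v * 255.0) as u8;
            }
            // Channels without an effect keep whatever value they already have.
        }
    }

    fn main() {
        let mut assignments = vec![None; 8192];
        for ch in 1..=12 { assignments[ch] = Some(Effect::Sine { freq_hz: 0.5 }); }
        for ch in 32..=35 { assignments[ch] = Some(Effect::Triangle { freq_hz: 1.0 }); }
        let mut frame = vec![0u8; 8192];
        render(&assignments, 0.25, &mut frame);
        println!("channel 1 = {}, channel 32 = {}", frame[1], frame[32]);
    }
    ```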

    I cannot post code yet (still have to tidy up the codebase), but it will be open source later on.


  • In general, dropped frames/effects that either take too long or get lost are not inherently bad, since there's another one coming in just 10-20 ms.

    It's just that I want to make sure these functions are not taking literal seconds or longer.
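
    What I'll probably do is time each frame against its budget and simply realign to the next tick when one overruns. A sketch of the idea (std only, numbers made up):

    ```rust
    use std::thread;
    use std::time::{Duration, Instant};

    fn compute_and_send_frame() {
        // stand-in for the effect math + DMX output
    }

    fn main() {
        let frame_period = Duration::from_millis(25); // ~40 Hz
        let mut next_deadline = Instant::now() + frame_period;

        for _ in 0..5 {
            let started = Instant::now();
            compute_and_send_frame();
            let took = started.elapsed();

            // A slow frame is only logged; the next tick is realigned so one
            // bad frame does not snowball into permanent lag.
            if took > frame_period {
                eprintln!("frame overran its budget: {:?}", took);
                next_deadline = Instant::now() + frame_period;
            } else {
                if let Some(remaining) = next_deadline.checked_duration_since(Instant::now()) {
                    thread::sleep(remaining);
                }
                next_deadline += frame_period;
            }
        }
    }
    ```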

    But thanks, I already feared this was the case. I will take a look at an RTOS; I just haven't read far into it and therefore don't know how you would code against one.

    Thanks anyways!