smooth live plot updates
daltonb last edited by daltonb
Let's say I want to visualize a function changing in real-time. What would be the best way to smoothly update a plot with an entirely new dataset at 60 FPS?
With ui.Path I don't see a way to update the whole path at once.. if I try to iterate through the dataset with line_to() on each update it's pretty laggy.
Just to make this concrete, in particular I'd like to visualize a damped sine wave (https://en.wikipedia.org/wiki/Damped_sine_wave) with a given window around the y-axis, feeding in new amplitudes in response to user input, and updating 't' as a function of time.
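For concreteness, the kind of function I have in mind is roughly this (a quick numpy sketch; the parameter names are just placeholders):

import numpy as np

# damped sine wave: y(t) = A * exp(-t / tau) * sin(2*pi*f*t)
A, tau, f = 1.0, 0.5, 5.0          # amplitude, decay constant, frequency
t = np.linspace(0.0, 2.0, 1024)    # fixed window of t values
y = A * np.exp(-t / tau) * np.sin(2 * np.pi * f * t)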
Thanks for any help!
mikael last edited by mikael
@daltonb, can you share the Python code that calculates the function? How many X-axis values are you plotting? Have you already tried reducing them to find a balance between performance and appearance?
mikael last edited by
@daltonb, another idea is to use the canvas module, specifically to reduce the number of points plotted while using bezier curves to keep the result smooth (and wrapping the drawing in begin/end updates).
First, are you using a numpy array to create the data? That will be orders of magnitude faster than computing each point individually.
Next, make sure you are only line_to-ing the pixels you need -- i.e., you might decimate the numpy array to match the screen resolution.
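For example, something like this (a rough sketch; W here stands for your plot width in points and y for your full-resolution data):

import numpy as np

W = 300                                  # assumed plot width in screen points
y = np.sin(np.linspace(0, 20, 100000))   # example full-resolution data

# keep roughly one sample per horizontal pixel before building the ui.Path
step = max(1, len(y) // W)
y_decimated = y[::step]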
See this thread for the fastest techniques we have found for displaying a 2d numpy array (image)
Taking your numpy array and converting it to a 2d array is not too hard, though that shows more of an array of dots than connected lines. I can post an example later.
There may be approaches using bezier curves instead of line_to points to reduce the density and thus increase speed.
Also, check out https://github.com/jsbain/backend_pythonista
If you are just looking for real-time updates as a "preview", this has some features to show a low-resolution JPEG preview at a high rate, then switch to high resolution when user interaction pauses.
I'm thinking there may be a way to combine these ideas -- having matplotlib draw to a canvas, which is then displayed using the IOSurfaceWrapper for speed.
If you are ambitious, you could try implementing https://github.com/eldade/ios_metal_bezier_renderer, which would be the fastest method, but will be tricky to implement in Pythonista.
I did implement a GLSL shader a while back for this sort of thing, which could perhaps be adapted.
ok, uibezierpath and shapelayer is very snappy.
this implements a touch responsive plot, where amplitude is set by vertical position, and tau is set by lateral position.
i get 50 fps or more on my crappy ipad
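The full script isn't reproduced here, but the core of the approach is to rebuild a ui.Path each frame and hand its CGPath to a CAShapeLayer sublayer, roughly like this stripped-down sketch:

import numpy as np
import ui
from objc_util import ObjCClass

CAShapeLayer = ObjCClass('CAShapeLayer')
UIColor = ObjCClass('UIColor')

class PlotView(ui.View):
    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)
        self.bg_color = 'white'
        layer = CAShapeLayer.alloc().init()
        layer.strokeColor = UIColor.grayColor().CGColor()
        layer.fillColor = UIColor.clearColor().CGColor()
        layer.lineWidth = 2
        self.objc_instance.layer().addSublayer_(layer)
        self.shape_layer = layer
        self.phase = 0.0

    def update(self):  # called every update_interval
        self.phase += 0.1
        x = np.linspace(0, self.width, 256)
        y = self.height / 2 + 50 * np.sin(x / 30 + self.phase)
        p = ui.Path()
        p.move_to(x[0], y[0])
        for xi, yi in zip(x[1:], y[1:]):
            p.line_to(xi, yi)
        # hand the new path to the shape layer; Core Animation does the drawing
        self.shape_layer.path = p.objc_instance.CGPath()

v = PlotView()
v.update_interval = 1 / 60
v.present()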
Awesome, thanks so much! I probably won’t get a chance to play around with these till tomorrow but will update when I do
@JonB that was super helpful.. would have taken me days to write that script even with all the hints. I made a little mic indicator for voice recognition -- here's progress so far! (Also, for me right now it runs perfectly half the time, but I'm curious if you get random segfaults and/or index out of bounds errors on some runs.. haven't figured out a cause or a pattern yet)
import ui
import numpy as np
import sound
import time
from objc_util import *

CAShapeLayer = ObjCClass('CAShapeLayer')

W = 225            # plot width in points
f = 25             # oscillation frequency
tau = 0.035        # width of the envelope
scroll = 0.4
voice_thresh = 25
voice_scale = 10
voice_max = 200
N = 1024
t = np.linspace(-0.5, 0.5, N)
pingpong = 10.0    # seconds before swapping recorders


class micView(ui.View):
    def __init__(self, *args, **kwargs):
        ui.View.__init__(self, *args, **kwargs)
        self.bg_color = 'white'
        L = CAShapeLayer.alloc().init()
        L.strokeColor = UIColor.grayColor().CGColor()
        L.fillColor = UIColor.clearColor().CGColor()
        L.lineWidth = 2
        self.objc_instance.layer().addSublayer_(L)
        L.setNeedsDisplay()
        self.L = L
        self.A = 0
        # two recorders, swapped periodically ("ping pong")
        self.r_N = 2
        self.r = [sound.Recorder('r' + str(i)) for i in range(self.r_N)]
        [r.meters for r in self.r]
        self.r_i = 0
        self.r_active = self.r[self.r_i]
        self.r_active.record()
        self.t = time.perf_counter()
        self.r_t = self.t

    def next_recorder(self):
        self.r_active.stop()
        self.r_i = (self.r_i + 1) % self.r_N
        self.r_active = self.r[self.r_i]
        self.r_active.record()
        self.r_t = self.t

    def update(self):
        if self.r_active:
            # map the average meter level to a wave amplitude
            self.A = min(voice_scale * max(0, (voice_thresh + max(self.r_active.meters['average']))), voice_max)
        p = ui.Path()
        if self.A:
            y = self.A * (.01 + np.sin(2 * np.pi * (t ** 2))) * np.cos(2 * np.pi * f * (t - scroll * self.t)) * np.exp(-(t ** 2) / tau) + self.height / 2
            y_offset = self.height / 2 - 100
            x_offset = (self.width - W) / 2
            p.move_to(x_offset, y[0] + y_offset)
            for ti, yi in zip(x_offset + (t - t[0]) / (t[-1] - t[0]) * W, y + y_offset):
                p.line_to(ti, yi)
        self.L.path = p.objc_instance.CGPath()
        self.L.setNeedsDisplay()
        self.t = time.perf_counter()
        if (self.t - self.r_t) > pingpong:
            self.next_recorder()

    def will_close(self):
        [r.stop() for r in self.r]  # necessary to free these or one recorder never stops.. not sure why
        self.r_active = None
        self.r = None


v = micView()
v.update_interval = 1 / 60
v.present()
mikael last edited by
@daltonb, experiment with liberal sprinkling of
@mikael Magic- I think that worked! It seemed to be running more smoothly but then I got those errors again.. though on further trial-and-error I think it was due to another script I ran which had similar issues. Maybe the other script left behind some thready detritus on close?
that's a really cool effect!
A few thoughts:
- if you are using the streaming SFSpeechRecognizer that was posted a few days back, then it will be possible to use the audio engine monitors directly, no need for the pingpong record.
- if you are just drawing 1024 points, try using regular draw() and stroke, instead of the layer method:
def update(self):
    self.set_needs_display()

def draw(self):
    if self.r_active:
        self.A = min(voice_scale * max(0, (voice_thresh + max(self.r_active.meters['average']))), voice_max)
    p = ui.Path()
    if self.A:
        y = self.A * (.01 + np.sin(2 * np.pi * (t ** 2))) * np.cos(2 * np.pi * f * (t - scroll * self.t)) * np.exp(-(t ** 2) / tau) + self.height / 2
        y_offset = self.height / 2 - 100
        x_offset = (self.width - W) / 2
        p.move_to(x_offset, y[0] + y_offset)
        for ti, yi in zip(x_offset + (t - t[0]) / (t[-1] - t[0]) * W, y + y_offset):
            p.line_to(ti, yi)
        p.stroke()
    self.t = time.perf_counter()
    if (self.t - self.r_t) > pingpong:
        self.next_recorder()
(and get rid of all the self.L business in __init__). This way might be slightly less efficient, but it avoids the ObjC stuff, which can be unreliable in tight loops or when running other scripts.
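For reference, a self-contained toy version of that draw()-based pattern (no mic input, just an animated curve; the numbers are arbitrary) would look something like this:

import numpy as np
import ui

class WaveView(ui.View):
    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)
        self.bg_color = 'white'
        self.phase = 0.0

    def update(self):             # runs every update_interval
        self.phase += 0.1
        self.set_needs_display()  # just schedule a redraw

    def draw(self):               # all drawing happens here
        t = np.linspace(-0.5, 0.5, 512)
        y = self.height / 2 + 80 * np.exp(-t ** 2 / 0.05) * np.sin(2 * np.pi * 10 * t + self.phase)
        x = (t - t[0]) / (t[-1] - t[0]) * self.width
        p = ui.Path()
        p.move_to(x[0], y[0])
        for xi, yi in zip(x[1:], y[1:]):
            p.line_to(xi, yi)
        ui.set_color('gray')
        p.stroke()

v = WaveView()
v.update_interval = 1 / 60
v.present()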
Thanks @JonB, I do like how it turned out! (and any suggestions on the effect are welcome). Nice chance to brush up on my math a bit. I grabbed the offset and the squared exponent in the "envelope function" from this excellent Quora exchange: https://www.quora.com/What-function-is-Arctic-Monkeys-album-cover
Yessir, that's my next goal... for me it was easier to get this MVP working before messing with Objective-C code. It will be nicer that way for sure though.. for starters I've had a hard time getting files to close cleanly with threaded code. Just a heads up, the issue affects your ping pong example from a while back as well.. at least for me, after running it, whichever .m4a file was last active keeps growing after exit.
Oof.. I didn't know the objc interop was inherently unreliable.. that's a bit sad to hear. Do you know any details about why or specifically what scenarios? I love working in Python, but this may end up being my gateway drug to actual iOS development, ha. Thanks for the example code!
@JonB just to double-check, do I need to roll my own audio filtering/metering algorithm if I want to use SFSpeechRecognizer? (e.g. https://stackoverflow.com/questions/30641439/level-metering-with-avaudioengine). From my quick research it seems the metering algorithm is only provided with AVAudioRecorder, which I couldn't figure out how to make compatible with SFSpeechRecognizer since (according to my rudimentary understanding) they both try to install a tap on the mic input node. As a quick check I tried using a sound.Recorder() along with the SFSpeechRecognizer implementation and got this crash:
com.apple.coreaudio.avfaudio: required condition is false: IsFormatSampleRateAndChannelCountValid(format)
(which is informing my interpretation that both are trying to tap the mic input.. not that I necessarily understand what that means). Thanks for any commentary!
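If it does come to rolling my own, my rough understanding is that the metering part reduces to something like this once you have the raw float samples out of the tap buffer (untested sketch, pure numpy):

import numpy as np

def average_power_db(samples, floor_db=-60.0):
    # RMS level of one buffer of float samples in [-1, 1], expressed in dB
    rms = np.sqrt(np.mean(np.square(samples)))
    if rms <= 0:
        return floor_db
    return max(floor_db, 20.0 * np.log10(rms))

# example: one 1024-sample buffer of a quiet 440 Hz tone at 44.1 kHz
buf = 0.05 * np.sin(2 * np.pi * 440 * np.arange(1024) / 44100)
print(average_power_db(buf))  # roughly -29 dB for this amplitude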
My iPad is charging, but I'll post an example using audio engine and animations. There are some built-in monitors on the underlying audio unit, I think.
My initial attempt seemed to have 0.375 sec latency, but I think I've figured out how to request fewer samples at a time.