Core Audio support? Split view in editor?
I am new to Pythonista and still in the discovery stage, so please forgive me if these things have been proven impossible already. Coding on my iPad is so exciting that I stay up way too late for the old person that I am now! I am primarily an artist--albeit a geeky one--and one very very very very dear wish of mine is to be able to code digital audio experiments on iOS. Thus my first question: are there any plans, or even tiny sparks of ideas, around Core Audio support, or an API or module that can send any (normalized) data stream to audio out?
Including PIL was a fabulous move; real audio programming would be even fabulouser. I am one of those people who thinks that Apple's compulsive shackling of iOS is nearly criminal. Given the computing power I am holding here in my left hand, I should be able to make huge complicated messes of noise and light without waiting for an app developer to provide access to basic tools that have been available for weeks/months/years, but here we are. Real-time audio synthesis? Pretty please?
My other request is a little less grand: a split view in the editor. I am self-taught in the digital arts and I learn best by typing in examples I find elsenet. Copy/paste is easy, but it does not teach me much. Since iOS is a one-view-at-a-time sort of computer, it can get a little clicky-finger tiring to work from examples--or reference docs of any sort.
Otherwise, I am so happy with Pythonista I could just spit! Thank you mister omz.
A few things might be of interest:
Pythonista drum machine: https://gist.github.com/omz/4034439
Also, if you keep the sample rate low enough and use numpy, it is possible to create sounds in near real time. See https://gist.github.com/jsbain/d81f771068229f2e0f9e for an example.
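If you just want the flavor of that approach without opening the gist, a minimal sketch looks something like this (the 440 Hz tone, file name, and parameter values are placeholders of mine, not from the gist):

```python
# Synthesize a short sine burst with numpy at a low sample rate,
# write it to a .wav file, and play it with Pythonista's sound module.
import wave

import numpy as np
import sound  # Pythonista's built-in sound module

RATE = 8000        # low sample rate keeps generation fast
DURATION = 0.5     # seconds
FREQ = 440.0       # placeholder tone frequency in Hz

t = np.arange(int(RATE * DURATION))
samples = (np.sin(2 * np.pi * FREQ * t / RATE) * 32767).astype(np.int16)

w = wave.open('tone.wav', 'wb')
w.setnchannels(1)      # mono
w.setsampwidth(2)      # 16-bit samples
w.setframerate(RATE)
w.writeframes(samples.tobytes())
w.close()

sound.play_effect('tone.wav')
```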
If you didn't want to cache sounds (say, when using a temporary .wav file), you would use the undocumented sound.Player object instead of sound.play_effect. See https://omz-forums.appspot.com/pythonista/post/5806179053731840
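Treating the interface the way that post describes it (sound.Player is undocumented, so this is an assumption about how it behaves), the swap is roughly:

```python
import sound

# play_effect caches sounds by file name, so rewriting 'tone.wav' would
# keep playing the stale cached version; a Player re-reads the file.
player = sound.Player('tone.wav')   # 'tone.wav' from the sketch above
player.play()
# player.stop() when done
```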
Thanks for the links, Jon. Yes, I am trying to create waveforms from scratch (with numpy, mainly) and send them to someAudioInterface where I can hear them. The file-writing part was what I was trying to avoid, but--and this is my ignorance showing--perhaps an audio file is going to have to act as the buffer I imagine I would need between signal creation and output in any case. I would like to see examples of writing audio buffers to memory (rather than to a file), at least for when I am working on my iMac, but I am not sure where to start looking. And although this strays off-topic for the Pythonista forums, I will just add that I have only begun looking at pyaudio/portaudio as possible tools that could save me from having to learn anything about Core Audio, so I am open to similarly off-topic hints. :)
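From what I have gleaned so far, the pyaudio callback pattern looks like the right shape for that--something like this sketch (desktop only, and the fixed 220 Hz tone is just a stand-in for whatever the generator would produce):

```python
# PortAudio pulls samples from the callback as it needs them, so the
# "buffer" lives in memory and never touches a file.
import time

import numpy as np
import pyaudio

RATE = 44100
FREQ = 220.0
phase = [0.0]  # mutable so the callback can keep the wave continuous

def callback(in_data, frame_count, time_info, status):
    t = np.arange(frame_count) + phase[0]
    out = np.sin(2 * np.pi * FREQ * t / RATE).astype(np.float32)
    phase[0] += frame_count
    return (out.tobytes(), pyaudio.paContinue)

pa = pyaudio.PyAudio()
stream = pa.open(format=pyaudio.paFloat32, channels=1, rate=RATE,
                 output=True, stream_callback=callback)
stream.start_stream()
time.sleep(2.0)        # let the tone run for a couple of seconds
stream.stop_stream()
stream.close()
pa.terminate()
```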
You should search the forum for CoreAudio and CoreMIDI to find out about the many issues surrounding support for these APIs in Pythonista. The short story is that supporting these APIs would force Pythonista to allow background processing, which is at odds with its ability to run "scripts". Apple views this as a very big issue, since it seems to allow people to write scripts that could drain the battery in the background. It is also not the main focus of Pythonista.
That is an interesting question. I know Pythonista is not focused on this sort of thing but it is the only Python implementation for iOS that gets anywhere close. Even with a (sshhhh!) jailbroken device, there is not much else available for, say, a sort of geeky artist who is trying to investigate the aesthetic dimensions of coding sound and image as close to the metal as they can get with the tools they already have. Which would be me and probably a few other people. Generally speaking, the Apple hacker avant-garde is not particularly interested in the arts as far as I can tell, and I don't have the time or energy to become one of them myself. So I come to places like this to beg a little. :)
I use Processing and Python on my iMac for most of this stuff, unless what I am doing is easier in PureData. Python is the only language I feel at all proficient in; I have a little foundational C knowledge, but not enough to take my jailbroken iThings and do anything interesting with them. I am not even sure whether enough compiler/linker tools have been ported to iOS yet, or whether anyone is going to do any more than what has already been done.
But I still like Python better!
Out of curiosity, can you give a more specific example of what you are trying to do?
For instance, are you starting with a pre-loaded audio sample, and applying different filters that the user interacts with? (if so, "near real time" seems adequate).
Is the user interacting with some object to generate sounds -- are those sounds completely arbitrary, or can you constrain them to a "library" (a finite number of notes and voices)? If the latter, see polymerchm's chordcalc, which plays notes that were pre-generated.
A viable third option: if you don't mind a slight pulsing, it is entirely possible to have a Thread which plays sound in, say, 0.5-second chunks. Once the play is kicked off, you can generate the next chunk, minimizing the gap. In numpy, generating and writing a single 0.5-second sine tone at an 8000 Hz sample rate takes a few msec.
There seems to be a slight click at the interval, most likely because the Thread timing is not exact.
But this lets you have an interactive UI (I can post a simple realtime adjustable tone generator example).
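In outline, the chunked thread looks something like this (the file handling and the fixed 440 Hz tone are placeholders; a real version needs to be more careful about timing):

```python
# Generate phase-continuous 0.5 s chunks in a background thread and
# kick each one off with play_effect while the next chunk is built.
import os
import tempfile
import threading
import time
import wave

import numpy as np
import sound

RATE = 8000      # low sample rate keeps numpy generation cheap
CHUNK = 0.5      # seconds per chunk
FREQ = 440.0     # placeholder tone
running = True   # set False from the UI to stop the loop

def write_wav(path, samples):
    w = wave.open(path, 'wb')
    w.setnchannels(1)
    w.setsampwidth(2)
    w.setframerate(RATE)
    w.writeframes(samples.tobytes())
    w.close()

def audio_loop():
    phase = 0.0
    n = int(RATE * CHUNK)
    while running:
        t = np.arange(n)
        chunk = np.sin(2 * np.pi * FREQ * t / RATE + phase)
        phase += 2 * np.pi * FREQ * n / RATE   # stay phase-continuous
        # a fresh temp file per chunk sidesteps play_effect's caching
        f = tempfile.NamedTemporaryFile(suffix='.wav', delete=False)
        f.close()
        write_wav(f.name, (chunk * 32767).astype(np.int16))
        sound.play_effect(f.name)
        time.sleep(CHUNK)   # crude timing -- the source of the clicks
        os.remove(f.name)

threading.Thread(target=audio_loop).start()
```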
I would like to see the tone generator example, if you don't mind.
I am not always sure what I am trying to do! Generally, I have been bouncing around the edges of algorithmic composition, glitch or bytebeat (image and sound), and iterative functions/fractals (image so far; leaning toward creating sound). So far there is no user to speak of. Interactivity is something I want to add later but at the moment everything is hard-coded without command line options. On the audio side, I am much more interested in spectral/timbral evolution than in scale-based composition, so my desired output tends toward shifting complex waveforms, which of course can be produced with several sine oscillators, but I want to shape the waveforms value by value in this particular somewhat unfocused project.
More specifically, right now I am wanting to create data streams programmatically from scratch and turn them into sound in real time. I have played with visual cellular automata and with iterated functions in Processing and would like to do analogous somethings with sound. My current bright idea is to create a very rudimentary and GUI-less version of GlitchMachine and/or BitWiz and then see what I can use it with besides the short C-style bitshifting functions that they are built to play.
Here are a couple of gists showing what I have done in Processing with visualizing iterated functions: Iterator.pde and iteratefx002.pde. These are commented mainly for myself, so if something is unclear or just strange I can try to explain. And here is a code snippet that very briefly investigates creating short arrays of data in Python. This was just an experiment to see what might happen if I threw arbitrary list permutations together. I have made longer ones that write the raw bytes to file, which I then play using SoX on my Mac--so far with mostly noisy results. :) Oh, and that was also before I knew anything about numpy, so it was very slow to create files long enough to hear!
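To make that concrete, the heart of such a toy is tiny; something like this (the formula is a well-known example from the bytebeat scene, not mine) produces raw 8-bit samples that SoX can play:

```python
# Evaluate a C-style bitshift formula over a sample counter t and keep
# only the low byte of each result, as the original bytebeats do.
import numpy as np

RATE = 8000
SECONDS = 10
t = np.arange(RATE * SECONDS, dtype=np.int64)

# classic bytebeat formula: t*(((t>>12)|(t>>8))&(63&(t>>4)))
samples = (t * (((t >> 12) | (t >> 8)) & (63 & (t >> 4)))) & 0xFF

with open('bytebeat.raw', 'wb') as f:
    f.write(samples.astype(np.uint8).tobytes())

# then, on the Mac, something like:
#   play -t raw -r 8k -e unsigned-integer -b 8 -c 1 bytebeat.raw
```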
Ok, I expanded the simple tone generator a bit into a more fun experiment: a theremin-type instrument. Tones are generated in real time, based on multitouch finger locations; volume is controlled by the horizontal axis, tone by the vertical. In retrospect, one could use the motion library for a more realistic experience, or perhaps introduce other modulation effects, etc.
The tricky part is keeping the gaps to a minimum. The audio thread generates a short section of the wave every 0.1 seconds. I tried to ensure the wave looks continuous by adjusting that time so it contains an integer number of cycles, and by tweaking the sleep time, but that didn't really make a difference; there are glitches at the dt interval. Tweaking this may give more pleasing results on your device.
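For reference, the cycle-rounding amounts to something like this (a sketch, not the exact code; freq and the 0.1 s nominal length are illustrative):

```python
# Round the chunk length to a whole number of cycles of freq so the
# wave ends where it began and the next chunk starts at zero phase.
RATE = 8000         # sample rate, as elsewhere in this thread
freq = 440.0        # current tone frequency (hypothetical value)
dt = 0.1            # nominal chunk length in seconds

cycles = max(1, int(round(freq * dt)))  # whole cycles that fit in dt
dt_actual = cycles / freq               # adjusted chunk length
n_samples = int(RATE * dt_actual)
```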
This implementation uses an audio thread for each finger, but this makes it lag when many fingers are present. Instead, there should be one audio thread which takes multiple frequency inputs.
The sound.Player stops playing when the file it was reading gets overwritten, so I had to go to a ping-pong buffer system: I have two open wav files, which I switch back and forth between, with the player always playing the file that I have already finished writing. These are tempfiles, so they get cleaned up when the files close as the thread stops.
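In outline, the ping-pong part looks something like this (the noise generator is a stand-in for the real synthesis, and sound.Player's behavior is assumed as described above):

```python
# Alternate between two temp .wav files so the one being played is
# never the one being rewritten.
import tempfile
import time
import wave

import numpy as np
import sound

RATE = 8000
CHUNK = 0.5   # seconds per buffer

paths = [tempfile.NamedTemporaryFile(suffix='.wav', delete=False).name
         for _ in range(2)]

def write_wav(path, samples):
    w = wave.open(path, 'wb')
    w.setnchannels(1)
    w.setsampwidth(2)
    w.setframerate(RATE)
    w.writeframes(samples.tobytes())
    w.close()

def next_chunk():
    # placeholder signal (soft white noise); real code synthesizes here
    noise = np.random.uniform(-0.3, 0.3, int(RATE * CHUNK))
    return (noise * 32767).astype(np.int16)

which = 0
for _ in range(8):                          # a few seconds, then stop
    write_wav(paths[which], next_chunk())   # fill the idle buffer
    player = sound.Player(paths[which])     # undocumented API, as above
    player.play()
    which = 1 - which                       # swap roles for the next pass
    time.sleep(CHUNK)
```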
Awesome! Thank you. I just skimmed through it and I already see some tricks I did not know about. And also a couple of places where I might add a little chaos. I have not yet tried to do any multithreading; nice time to start learning, even if only enough for now to turn it off. Eventually maybe a reason will appear to turn it back on.
I'll be back in a couple days if I manage to make anything interesting.
Sorry, I was missing several imports. A new version has been posted, which now also uses a single audio-gen thread, so it supports as many fingers as you have.