"A.I." (Machine Learning Algorithms) To Generate Art


Comments

  • @Krupa said:
    It’ll be them using the same prompts; the models have practically all of art history baked in, so there’s really no need for everything to look the same…

    @Carnbot said:
    "All of art history" is not a good way to describe it, though. These models are restricted and limited, and have lots of biases towards certain things.

    @Krupa said:
    Sure thing, "all" is stretching it, but I’m not using any contemporary or even recent references in my prompts, and that’s reflected in the results I’m getting. As they say, garbage in, garbage out 👍

    @Carnbot said:
    Yes, and I think the purity of a process is what can make artwork interesting. The problem is that these models are curated by organisations, and what goes into them is decided by huge data scraping. That will always create heavy bias and generic results. There's a lot of garbage in there which can't be removed unless it was never there in the first place. The average user won't mind, but the best artists will always want more control over the material.

    @Krupa said:
    Indeed. Once I’ve figured it out, I’ll be creating my own models from the trends of thousands of photos I’ve taken over the years, and selecting paintings from the areas that interest me; in one way I’m just looking for really good style transfer. The new project I posted above, though, I’m going to let go full-bore collective memory, as that relates precisely to the content of the work…

    @Carnbot said:
    Yeah, I've been exploring some of that, training on my own source material, and I used this process as part of a recent work I did in the Czech Republic. The process will be better when GPUs are even faster, so we won't need the online models at all. I still don't like the latent flavour it leaves behind.

    @Krupa said:
    That sounds interesting. Did you use DreamBooth, or some other method? Have you already posted links to the work?

    @Carnbot said:
    I haven't posted links or images here yet, but I will. I used the standard training in the Stable Diffusion web GUI, mainly because it's fast to use and train with. I composed the piece by combining it with more procedurally based animation and compositing, because I don't like using it without that; I also used AI as part of the subject of the project, so it made sense. It is interesting to use, and I'm sure I will continue to use it as a process, but not always.

    @Krupa said:
    Definitely sounds cool, I’ll look forward to seeing that!
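    For anyone curious what that kind of training and rendering looks like outside the web GUI, here's a minimal, hedged sketch of prompt-driven generation with Hugging Face’s diffusers library. The library choice, model ID and prompt are my assumptions for illustration; a custom-trained (e.g. DreamBooth) checkpoint directory could be passed to from_pretrained() instead of the base model.

    ```python
    # Minimal text-to-image sketch (assumes diffusers + torch and a CUDA GPU).
    import torch
    from diffusers import StableDiffusionPipeline

    # A fine-tuned/DreamBooth checkpoint path could replace the base model ID.
    pipe = StableDiffusionPipeline.from_pretrained(
        "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
    )
    pipe = pipe.to("cuda")

    # "Garbage in, garbage out": the prompt is where the stylistic bias lives.
    prompt = "a city square in the style of a 19th-century oil painting"
    image = pipe(prompt, num_inference_steps=30, guidance_scale=7.5).images[0]
    image.save("render.png")
    ```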

  • AI generating spectrograms which in turn become music: https://www.riffusion.com/about

  • @FastGhost said:
    AI generating spectrograms which in turn become music: https://www.riffusion.com/about

    I think that example starts to point toward where audio production is headed. Since all audio can be represented as a spectrum, every element should ultimately be identifiable, replaceable and manipulable with the assistance of machine learning. Even after a final mix, it should be possible to extract or replace virtually any part of it.
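    For anyone who wants to poke at the spectrogram idea directly, here is a minimal sketch of the audio-to-spectrogram-and-back round trip that riffusion builds on, assuming the librosa and soundfile Python libraries ("example.wav" is a hypothetical input file):

    ```python
    # Audio -> mel spectrogram (an image-like 2-D array) -> audio again.
    import librosa
    import soundfile as sf

    y, sr = librosa.load("example.wav", sr=22050, mono=True)

    # Forward: the 2-D array a diffusion model would treat as an image.
    mel = librosa.feature.melspectrogram(
        y=y, sr=sr, n_fft=2048, hop_length=512, n_mels=128
    )

    # Inverse: Griffin-Lim phase reconstruction recovers listenable audio.
    # (The spectrogram discards phase, so this is an approximation.)
    y_hat = librosa.feature.inverse.mel_to_audio(
        mel, sr=sr, n_fft=2048, hop_length=512
    )

    sf.write("roundtrip.wav", y_hat, sr)
    ```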

  • Neato!

  • I have to be honest; this shit concerns me (a lot!).
    I have just broken away from my mundane IT support job (due to both the job and other political reasons in our department) and I want to concentrate on working on my 3D art.

    I’m 41, and I’ve had enough of what I’m doing; I figured it’s time to start doing what I will enjoy and can be proud of.

    I was well aware of the concept of art not being so lucrative, unfortunately… but at this point in my life, I felt like it was what I wanted to do.

    Over on the 3D modelling forums I belong to, people are already starting to worry about the threat of Nvidia producing 3D models from text input. And it worries me too.

    I produce buildings in 3D. Not exactly incredible, but I’m hoping to improve…
    Www.Sketchfab.com/SkillipEvolver

  • @SkillipEvolver said:
    I have to be honest; this shit concerns me (a lot!).
    I have just broken away from my mundane IT support job (due to both the job and other political reasons in our department) and I want to concentrate on working on my 3D art.

    I’m 41, and I’ve had enough of what I’m doing; I figured it’s time to start doing what I will enjoy and can be proud of.

    I was well aware of the concept of art not being so lucrative, unfortunately… but at this point in my life, I felt like it was what I wanted to do.

    Over on the 3D modelling forums I belong to, people are already starting to worry about the threat of Nvidia producing 3D models from text input. And it worries me too.

    I produce buildings in 3D. Not exactly incredible, but I’m hoping to improve…
    Www.Sketchfab.com/SkillipEvolver

    It’s already happening and it’s going to get better by becoming faster and easier. You should pursue the career you want, but always be aware of the current state of the art in the marketplace.

  • @Svetlovska said:
    I have mostly been using an iPad app called Dream:

    https://apps.apple.com/gb/app/dream-by-wombo-ai-art-tool/id1586366816

    to create the illustrations for a book of slightly kinky micro horror fictions I am putting together. Here is an example:

    And just to prove I am equal opportunities in these matters:

    Interesting! I wonder if you'll produce any ambient music to go with those and create a multimedia experience/installation of some sort. :)

    I've played with Dream and Wonder; I got a lifetime license for Wonder and pay a monthly subscription for Dream. I like Wonder more, to be honest, but both are fun to mess around with.

  • @FastGhost said:
    AI generating spectrograms which in turn become music: https://www.riffusion.com/about

    This piques my interest. There are apps that show spectrograms… hmmm… thinking…

  • @NeuM said:

    @SkillipEvolver said:
    I have to be honest; this shit concerns me (a lot!).
    I have just broken away from my mundane IT support job (due to both the job and other political reasons in our department) and I want to concentrate on working on my 3D art.

    I’m 41, and I’ve had enough of what I’m doing; I figured it’s time to start doing what I will enjoy and can be proud of.

    I was well aware of the concept of art not being so lucrative, unfortunately… but at this point in my life, I felt like it was what I wanted to do.

    Over on the 3D modelling forums I belong to, people are already starting to worry about the threat of Nvidia producing 3D models from text input. And it worries me too.

    I produce buildings in 3D. Not exactly incredible, but I’m hoping to improve…
    Www.Sketchfab.com/SkillipEvolver

    It’s already happening and it’s going to get better by becoming faster and easier. You should pursue the career you want, but always be aware of the current state of the art in the marketplace.

    Of course


  • Normally I avoid forwarding these vids, but these folks are smart, creative, legit and blah blah

  • Just checked out that riffusion thing referenced above. The creators’ article opens with a couple of ‘meh’ examples of what it does, but further down the page, when they demonstrate smooth interpolations from one sound to another, it gets seriously impressive.

    One example, where the spectrogram of someone typing gradually morphs into a jazz piece, is particularly striking.

    It seems to me, as an untechnical punter, that this is a very different deployment of AI from previous AI music tools, which create rules-based, generic (bland) library music. This is more akin to an audio version of those AI facial morphs you’ve probably already seen, where Joe Schmo smoothly and imperceptibly turns into Tom Cruise.

    I already want a web interface where I can upload one of my own audio samples and a text prompt to have it turn into, I dunno, ‘Lovecraftian monster slowly rising from the ocean’, with the ability to spec key, tempo, and length…

    It is all getting very… cool?

    Now I’m jonesing for one of our superstar devs here to build a Dream-by-Wombo-style iPad gateway to make it so. The riffusion folks seem to have put their code up on GitHub, so it could happen. I’d pay good cash for that…
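    For the technically curious: the riffusion write-up describes interpolating in the model’s latent space, and the standard ingredient for that kind of smooth morph is spherical linear interpolation (slerp) between two latent noise vectors. A minimal sketch of just that step, assuming nothing about their actual code beyond what the article describes:

    ```python
    # Slerp between two latent vectors; decoding the intermediate latents
    # yields spectrograms that morph smoothly from one sound to another.
    import numpy as np

    def slerp(t: float, a: np.ndarray, b: np.ndarray) -> np.ndarray:
        """Spherical interpolation from latent a to latent b, t in [0, 1]."""
        a_n = a / np.linalg.norm(a)
        b_n = b / np.linalg.norm(b)
        omega = np.arccos(np.clip(np.dot(a_n, b_n), -1.0, 1.0))
        if np.isclose(omega, 0.0):
            return (1.0 - t) * a + t * b  # nearly parallel: plain lerp is fine
        return (np.sin((1.0 - t) * omega) * a + np.sin(t * omega) * b) / np.sin(omega)

    # Hypothetical endpoints, e.g. "typing" noise morphing into "jazz".
    rng = np.random.default_rng(0)
    z_typing, z_jazz = rng.standard_normal(64), rng.standard_normal(64)
    frames = [slerp(t, z_typing, z_jazz) for t in np.linspace(0.0, 1.0, 8)]
    ```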

  • @Ben said:

    @ErrkaPetti said:
    Yet another example of an AI image enhancer - a closeup of me taken in 1965…


    That is impressive. What we can see of your dad looks pretty good too.

    Looks great. What is the IAP cost, please?

  • Had another good session after 100+ generations of renderings:

  • Have a great Sunday everyone:

  • I don’t know why, but this whole AI-generated art thing reminds me a lot of my neighbour’s astroturf.

  • @supadom said:
    I don’t know why, but this whole AI-generated art thing reminds me a lot of my neighbour’s astroturf.

    Sadly, we sleepwalk into this with very little thought for the consequences, almost as if the Corporatocracy says that if we don’t do it, our opposition will. But worry not, we can always drop out into the Zuckerverse.

  • @supadom said:
    I don’t know why, but this whole AI-generated art thing reminds me a lot of my neighbour’s astroturf.

    Imho it’s no different than sampling in music. It all depends on how you use the output it gives you. If you just rely on the initial image then yes, it is like astroturf… but if you slice, dice, resample and build upon parts and pieces of the output, you create an entirely new landscape to enjoy. I’m taking this opportunity to get into my Robert Rauschenberg mode with all of these tools.

    That’s my view of it after thinking about it over the last few months. The long view is that for visual artists who want to use the system to iterate on their preferred styles and motifs, it is an amazing tool, and this is the lens I am seeing it through. I see it in the same light as I see Riffer, Fugue Machine, PlayBeat 3 and Scaler 2: a tool to push me into new places based on my intention.

    They need to work out the legality of it all, though… but in the meantime, it’s a vast sample crate I am enjoying creating visuals with when inspiration strikes. I’m using it to generate a ton of textures and elements I can use for years. 😉

  • Anybody here selling their creations? And on what platforms? Giclée prints, or downloads?
    I'd be interested to know whether providing actual prints or downloads is best.

  • Just like audio sampling, one day they will be able to trace the original content the AI scanned and copyright-claim the F%^k out of everyone. Be prepared.

  • I think this video is a useful primer for anyone on how AI images are made and what they represent. It treats AI images as a kind of data visualisation: an infographic of the dataset, a map which reveals the connections in the data, in this case the datasets of images behind these text-to-image models. (He compares it to John Snow’s famous cholera map, which started this whole journey.) When we can explore this technology in realtime, only possible now on very high-end computers, you will see this map visualisation more clearly. It’s a good intro for anyone who wants to learn to use AI as an artist, because knowing how it works (and what it is designed to do) matters when creating work with it, rather than just treating it as a magic “black box” you have no control over.
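    To make the “map” idea concrete, a hedged sketch (all data below are random stand-ins, not anything from the video): projecting a pile of image-embedding vectors down to two dimensions shows the neighbourhood structure a text-to-image model draws on.

    ```python
    # Project high-dimensional image embeddings to 2-D to "map" their
    # connections. Random vectors stand in for real CLIP-style embeddings.
    import numpy as np
    from sklearn.manifold import TSNE

    rng = np.random.default_rng(42)
    embeddings = rng.standard_normal((200, 512)).astype(np.float32)

    coords = TSNE(n_components=2, perplexity=30, random_state=0).fit_transform(embeddings)
    print(coords.shape)  # (200, 2): each row is one image's position on the map
    ```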

  • @Danny_Mammy said:
    Just like audio sampling, one day they will be able to trace the original content the AI scanned and copyright-claim the F%^k out of everyone. Be prepared.

    Diffusion rendering is nothing like audio sampling. Not that it has totally ethical roots, of course, and there are a few rare cases of overfitting even in the best ML models (where you can tell the source material), but in 99.9999999% of diffusion renderings things are completely, effectively laundered into oblivion.

  • @Carnbot said:
    When we can explore this technology in realtime, only possible now on very high-end computers, you will see this map visualisation more clearly.

    You don’t even have to render in realtime to get a clear sense of that; just rendering sequences that blend between datapoints illustrates it well too.

  • @AudioGus said:

    @Danny_Mammy said:
    Just like audio sampling, one day they will be able to trace the original content the AI scanned and copyright-claim the F%^k out of everyone. Be prepared.

    Diffusion rendering is nothing like audio sampling. Not that it has totally ethical roots, of course, and there are a few rare cases of overfitting even in the best ML models (where you can tell the source material), but in 99.99999% of diffusion renderings things are completely, effectively laundered into oblivion.

    That’s not strictly true: the data is in the model and can be replicated and reconstructed quite accurately, but the model is usually never asked to, or it happens accidentally, since you have very little control over the data with text-to-image/audio apps. So while it’s not exactly like traditional sampling, it is a different type of sampling (data sampling, say), and it’s closer to sampling than not.

  • @Carnbot said:

    @AudioGus said:

    @Danny_Mammy said:
    Just like audio sampling, one day they will be able to trace the original content the AI scanned and copyright-claim the F%^k out of everyone. Be prepared.

    Diffusion rendering is nothing like audio sampling. Not that it has totally ethical roots, of course, and there are a few rare cases of overfitting even in the best ML models (where you can tell the source material), but in 99.99999% of diffusion renderings things are completely, effectively laundered into oblivion.

    That’s not strictly true: the data is in the model and can be replicated and reconstructed quite accurately, but the model is usually never asked to, or it happens accidentally, since you have very little control over the data with text-to-image/audio apps. So while it’s not exactly like traditional sampling, it is a different type of sampling (data sampling, say), and it’s closer to sampling than not.

    The images cannot be replicated and reconstructed unless they are overfitted, for example when many extraneous duplicates were present in the dataset the model was trained on. In the case of Stable Diffusion, the model was trained on billions of images and is only about 4 GB. There is no way to replicate and reconstruct billions of images from a 4 GB file. There are certainly some examples of overfitting in there, but the vast, overwhelming majority are vapor.
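    That compression argument is easy to sanity-check with back-of-envelope arithmetic (the figures are rough public numbers for Stable Diffusion v1: on the order of two billion LAION training images and a roughly 4 GB checkpoint):

    ```python
    # Rough, assumed figures: ~2 billion training images, ~4 GB checkpoint.
    images = 2_000_000_000
    checkpoint_bytes = 4 * 1024**3

    print(checkpoint_bytes / images)  # ~2.1 bytes of model weight per image
    # A couple of bytes per image cannot store the image itself; only broadly
    # shared statistics survive training, which is why verbatim recall
    # (overfitting) generally requires heavily duplicated training data.
    ```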

  • @Carnbot said:

    @AudioGus said:

    @Danny_Mammy said:
    Just like audio sampling, one day they will be able to trace the original content the AI scanned and copyright-claim the F%^k out of everyone. Be prepared.

    Diffusion rendering is nothing like audio sampling. Not that it has totally ethical roots, of course, and there are a few rare cases of overfitting even in the best ML models (where you can tell the source material), but in 99.99999% of diffusion renderings things are completely, effectively laundered into oblivion.

    That’s not strictly true: the data is in the model and can be replicated and reconstructed quite accurately, but the model is usually never asked to, or it happens accidentally, since you have very little control over the data with text-to-image/audio apps. So while it’s not exactly like traditional sampling, it is a different type of sampling (data sampling, say), and it’s closer to sampling than not.

    True. As to the legality, well, that’s being debated.

  • @Krupa said:
    Lovely stuff @echoopera, very polished. I’m still running my animatic through Stable Diffusion. I put out an Instagram version with a before-and-after last week, and most people actually preferred the unprocessed version…

    https://www.instagram.com/reel/ClUPD93NPZv/?igshid=YmMyMTA2M2Y=

    Wow, your animation is brilliant!

    Sorry, I can’t seem to find the link. Any chance you can post it here? Thanks.

  • @Toastedghost said:

    @Krupa said:
    Lovely stuff @echoopera, very polished. I’m still running my animatic through Stable Diffusion. I put out an Instagram version with a before-and-after last week, and most people actually preferred the unprocessed version…

    https://www.instagram.com/reel/ClUPD93NPZv/?igshid=YmMyMTA2M2Y=

    Wow, your animation is brilliant!

    Sorry, I can’t seem to find the link. Any chance you can post it here? Thanks.

    That link in there is the comparison one - if it works for you, the left panel is the unprocessed version and the right is interpreted by Stable Diffusion…

    Thank you for checking it out, and for your kind words. I’ve been unable to do much on it this last month, but I’ve made a week plan over the last few days, so I should be getting into it again (alongside the 7 other projects I’ve somehow assigned myself 😅😂)
