
Quickly Create Music Videos Using ffmpeg

I’m going to share the command line that I use to generate a simple music video from an audio file and an image. The video contains the original audio and an XY scope visualization of the audio superimposed over a custom background. The process depends on a few things:

  • ffmpeg – search the internet for download and installation instructions suitable for your platform
  • an image file to serve as a background. To use the command below, you’ll want to make a 1920×1080 image
  • an audio file (of course!)

The following are four separate commands for the Windows command line. Other platforms will be slightly different. You’ll want to change the first three to refer to your own filenames (you can guess the role each file plays in the process). The last command does the heavy lifting and is a single, really long, command. If you’re not familiar with using a command line interface (typing commands into a big blank window), you’ll want to know how to copy and paste into a command window, and note that every character is essential.

SET IMAGEFILE="test.png"
SET AUDIOFILE="test1.wav"
SET OUTFILE="testout.mp4"

ffmpeg -loop 1 -i %IMAGEFILE%  -i %AUDIOFILE%  -filter_complex "[1:a]avectorscope=s=1920x1080:draw=line:mode=lissajous_xy:rc=100:gc=120:bc=255:rf=9:gf=9:bf=9,format=yuv420p[v],[v]split[m][a];[m][a]alphamerge[keyed];[0][keyed]overlay=eof_action=endall" %OUTFILE%

That’s it! I hope it works for you.


Seven Things I’ve Learned About FM Synthesis

If you like math, are interested in synthesis, and need to write a paper, maybe this article will be of interest to you. I’m sharing a paper from a long time ago that documents my personal research into the topic of FM synthesis, with CSound’s foscil opcode and a little math.

The references at the end of the paper might be reason enough to download.


Release of the Marimba SFZ Instrument

I’m very excited to announce the Big Winky Media Marimba SFZ Instrument is available for purchase.

The instrument is the result of hours of reviewing, packaging, and testing dozens of high-quality samples recorded in an anechoic chamber at the University of Iowa. The cord-wound and yarn-wound mallet instruments feature three velocity layers each. A rubber mallet instrument is also included.

Some demonstrations are available below. The demos themselves are provided under the Creative Commons Attribution-ShareAlike license.

The first demo, More Linear by Dan Liszewski, uses the yarn-wound mallet samples.

The next demo, Shifting Paradigm, also by Dan Liszewski, features the yarn mallets as well.

If you don’t want to pay five bucks, check out the Free Marimba (rubber mallet only) soundfont.


Ancient College Paper Predicts SoundCloud, Spotify

I think it was 1982 when I wrote a paper for a course on technology and society. One of my sources was Alvin Toffler’s The Third Wave. The book makes a number of predictions for a post-industrial world and was the primary source for my conclusion. (This scan shows a draft printed on continuous-feed fan-fold “computer paper” by a line printer capable of only upper-case letters! You can even see the holes on the left edge where the printer feed mechanism engaged the paper.) Reading this 37 years later, it seems to describe the Internet, Spotify and SoundCloud!

The conclusion of a paper I wrote in 1982 seems to forecast music streaming services and self service audio distribution platforms.

The media will become “de-massified” according to Alvin Toffler in post industrial society. No longer will single radio stations broadcast to thousands of anonymous listeners hungry for entertainment. Developments like cable communications will personalize listening even more than recordings have. Instead of collecting vinyl disks, listeners will be able to dial up select recordings through their cable system from a central library according to their own tastes. It is even possible that contributions to the central libraries may be made by members of the community.

Daniel Liszewski 1982

MIDI Time Math…Again

Time, MIDI, Music and Math

Originally published: 2007-05-28 04:57:55
Original post URL:

I am getting tired of deriving formulas for converting various MIDI data to real time. At the moment I want to make a standard MIDI file (SMF) to CSound .sco conversion utility. (Yes, I know one already exists.) The most significant part of the task is to come up with some code that takes the time signature, tempo, and timing data and converts it to clock time (in seconds).

So here we go, with an attempt to memorialize this once and for all!

MIDI Division is the number of delta-times per quarter note. I’m calling the units of division [ticks/quarter-note].

MIDI Tempo is specified in [microseconds/quarter-note].

A MIDI quarter-note is 24 MIDI-clocks. (Clocks are not ticks. Clocks are not delta-times.)

A MIDI time signature comprises several quantities (we don’t need these for the problem at hand but, while I’m documenting this stuff, I’ll include them):

  • numerator (beats per bar)
  • denominator, stored as a negative power of two
  • number of MIDI-clocks per metronome click [MIDI-clocks/beat]
  • number of 32nd notes per MIDI quarter note (I’m assuming this last one is always constant and equal to eight)
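As an illustration of how those time-signature quantities fit together, here’s a small sketch in Python (the function and argument names are my own, not from any MIDI library) that computes the length of one bar in seconds from the numerator, the power-of-two denominator, and the tempo:

```python
def bar_duration_seconds(numerator, denominator_pow2, tempo_us_per_qn):
    """Length of one bar in seconds.

    numerator        -- beats per bar (e.g. 6 for 6/8)
    denominator_pow2 -- MIDI stores the denominator as a power of two
                        (e.g. 3 means 2**3 = 8, an eighth-note beat)
    tempo_us_per_qn  -- MIDI tempo in microseconds per quarter note
    """
    # How many quarter notes long is one beat? A whole note is 4 quarter notes.
    quarter_notes_per_beat = 4 / (2 ** denominator_pow2)
    seconds_per_beat = quarter_notes_per_beat * tempo_us_per_qn / 1_000_000
    return numerator * seconds_per_beat

# 4/4 at 120 BPM (tempo = 500000 microseconds per quarter note):
print(bar_duration_seconds(4, 2, 500000))  # one bar = 2.0 seconds
```

So a 6/8 bar at the same tempo works out to 1.5 seconds: six eighth-note beats of a quarter second each.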

So, to convert a quantity of MIDI delta-times to seconds:

(numDeltaTimes [ticks] / division [ticks/quarter-note]) * tempo [microseconds/quarter-note] * (1 [second] / 10^6 [microseconds])

or without all my ‘unit’ notation:

numTicks * tempo / (division * 10^6)

where numTicks, tempo and division are the raw quantities from the events and header of the MIDI file.
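That formula packages up as a tiny helper; this is just a sketch with names of my own choosing, not code from any existing converter:

```python
def ticks_to_seconds(num_ticks, division, tempo_us_per_qn):
    """Convert a quantity of MIDI delta-time ticks to seconds.

    division        -- ticks per quarter note, from the SMF header
    tempo_us_per_qn -- microseconds per quarter note, from the tempo meta event
    """
    return num_ticks * tempo_us_per_qn / (division * 1_000_000)

# At 120 BPM (tempo = 500000) with division = 480, one quarter note
# (480 ticks) lasts half a second:
print(ticks_to_seconds(480, 480, 500000))  # 0.5
```

One caveat: the tempo can change partway through a file, so a real converter has to accumulate seconds segment by segment, applying each tempo only to the ticks elapsed while it was in effect.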


Happy New Year!

2019 is off to a good start with the return of this site to cyberspace! During the downtime, I’ve been preparing a number of tracks for release. Watch this space! I will post details when they become available.

This site is very basic, but I plan on adding more content, including some of the stuff that was posted over the last 12 years or so before the site went down earlier this year.