If you want a decoder ring for the part of music theory that’s all about seven-note scales and their different modes, the Heptatonic Modes Matrix might be just what you need!
I’m going to share the command line that I use to generate a simple music video from an audio file and an image. The video will contain the original audio and an XY scope visualization of the audio superimposed over a custom background. The process depends on a few things:
ffmpeg – search the internet for download and installation instructions suitable for your platform
an image file to serve as a background. To use the command below, you’ll want to make a 1920×1080 image
an audio file (of course!)
The following are four separate commands for the Windows command line; other platforms will be slightly different. Change the first three to refer to your own filenames (you can guess the role of each file in the process). The last command does the heavy lifting and is a single, very long, command. If you’re not familiar with using a command line interface (typing commands into a big blank window), you’ll want to know how to copy and paste into a command window, and note that every character is essential.
SET IMAGEFILE="test.png"
SET AUDIOFILE="test1.wav"
SET OUTFILE="testout.mp4"
ffmpeg -loop 1 -i %IMAGEFILE% -i %AUDIOFILE% -filter_complex "[1:a]avectorscope=s=1920x1080:draw=line:mode=lissajous_xy:rc=100:gc=120:bc=255:rf=9:gf=9:bf=9,format=yuv420p[v];[v]split[m][a];[m][a]alphamerge[keyed];[0][keyed]overlay=eof_action=endall" %OUTFILE%
If you like math, are interested in synthesis, and need to write a paper, maybe this article will be of interest to you. I’m sharing a paper from long ago that documents my personal research into the topic of FM synthesis, using CSound’s foscil opcode and a little math.
The references at the end of the paper might be reason enough to download.
I’m very excited to announce the Big Winky Media Marimba SFZ Instrument is available for purchase.
The instrument is the result of hours of reviewing, packaging, and testing dozens of high-quality samples recorded in an anechoic chamber at the University of Iowa. The cord-wound and yarn-wound mallet instruments feature three velocity layers each. A rubber mallet instrument is also included.
Some demonstrations are available below. The demos themselves are provided under the Creative Commons Attribution-ShareAlike license.
The first demo, More Linear by Dan Liszewski, uses the yarn-wound mallet samples.
The next demo, Shifting Paradigm, also by Dan Liszewski, features the yarn mallets as well.
I have packaged some of the marimba samples from The University of Iowa Musical Instrument Samples collection into a free SoundFont (.sf2) for your unrestricted use. Go to the product page to download.
This release only features the rubber mallet samples. Let me know if you’d be interested in a package with the yarn mallet samples as well.
In 1982, I think, I wrote a paper for a course on technology and society. One of my sources was Alvin Toffler’s The Third Wave. The book makes a number of predictions about a post-industrial world and was the primary source for my conclusion. (This scan shows a draft printed on continuous-feed fan-fold “computer paper” by a line printer capable of only upper-case letters! You can even see the holes on the left edge where the printer’s feed mechanism engaged the paper.) Reading this 37 years later, it seems to describe the Internet, Spotify, and SoundCloud!
The conclusion of a paper I wrote in 1982 seems to forecast music streaming services and self-service audio distribution platforms.
The media will become “de-massified” according to Alvin Toffler in post industrial society. No longer will single radio stations broadcast to thousands of anonymous listeners hungry for entertainment. Developments like cable communications will personalize listening even more than recordings have. Instead of collecting vinyl disks, listeners will be able to dial up select recordings through their cable system from a central library according to their own tastes. It is even possible that contributions to the central libraries may be made by members of the community.
I am getting tired of deriving formulas for converting various MIDI data to real time. At the moment I want to make a standard MIDI file (SMF) to CSound .sco conversion utility. (Yes, I know one already exists.) The most significant part of the task is to come up with some code that takes the time signature, tempo, and timing data and converts it to clock time (in seconds).
So here we go, with an attempt to memorialize this once and for all!
MIDI Division is the number of delta-time ticks per quarter-note. I’m calling the units of division [ticks/quarter-note].
MIDI Tempo is specified in [microseconds/quarter-note].
A MIDI quarter-note is 24 MIDI-clocks. (MIDI-clocks are not ticks, and they are not delta-times.)
Some quantities comprise a MIDI time signature (we don’t need this for the problem at hand but, while I’m documenting this stuff, I’ll include it): the numerator (beats per bar); the denominator, expressed as a negative power of two; the number of MIDI-clocks per metronome click [MIDI-clocks/beat]; and the number of 32nd notes per MIDI quarter-note (this last number I’m assuming is always constant and equal to eight).
So, to convert a quantity of MIDI delta-time ticks to seconds: seconds = ticks × tempo ÷ division ÷ 1,000,000, with tempo in [microseconds/quarter-note] and division in [ticks/quarter-note]. (The quarter-notes and microseconds cancel, leaving seconds.)
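Here’s a minimal Python sketch of that conversion. The function name and the sample values are my own for illustration; 500,000 µs/quarter-note is the SMF default tempo (120 BPM), and 480 ticks/quarter-note is a common division value.

```python
def ticks_to_seconds(delta_ticks, tempo_us_per_qn, division):
    """Convert a MIDI delta-time (in ticks) to clock time (in seconds).

    tempo_us_per_qn: MIDI tempo in [microseconds/quarter-note]
    division:        SMF header division in [ticks/quarter-note]
    """
    # ticks * (us/quarter-note) / (ticks/quarter-note) = microseconds;
    # divide by 1,000,000 microseconds per second to get seconds.
    return delta_ticks * tempo_us_per_qn / (division * 1_000_000)

# 960 ticks at 480 ticks/quarter-note is two quarter-notes;
# at 120 BPM each quarter-note lasts half a second.
print(ticks_to_seconds(960, 500_000, 480))  # 1.0
```

Note that a real SMF-to-.sco converter would have to re-apply this formula every time a tempo meta-event changes tempo_us_per_qn mid-track.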
2019 is off to a good start with the return of bigwinky.com to cyberspace! During the downtime, I’ve been preparing a number of tracks for release. Watch this space! I will post details when they become available.
This site is very basic, but I plan on adding more content, including some of the stuff that was posted over the 12 years or so before bigwinky.com went down earlier this year.