There is a new revision of the Heptatonic Modes Matrix available!
This week’s MusicWeeklies Challenge:
Here are the numbers for my track enby:
Time Signature: 11/8 (I’m hoping we only need to make the numerator prime!)
Tempo: 113 BPM (I entered “113” into my DAW, which takes that as quarter notes per minute.)
Number of Voices: Seven (7): e-piano, two basses, three synths, and a drone
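As a quick sanity check on those numbers, here is the bar length worked out in a few lines of Python (assuming, as noted above, that the DAW treats 113 BPM as quarter notes per minute, so an eighth note is half a quarter):

```python
# Bar length for 11/8 at 113 quarter-notes per minute.
quarter_sec = 60 / 113          # seconds per quarter note
eighth_sec = quarter_sec / 2    # seconds per eighth note
bar_sec = 11 * eighth_sec       # seconds per 11/8 bar

print(round(bar_sec, 3))        # prints 2.92
```

So each bar lasts just under three seconds.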
I began with some vague ideas about structure, including the overall form and what the underlying chords would include. In keeping with the non-binary theme, I mostly used sus2 chords, which sound neither major nor minor.
The melodies are the result of a kind of formula. I started with rhythms chosen from a list of Euclidean Rhythms: E(4,11), E(5,11), and E(6,11) for the A section, and E(17,22) for the fast figure in the B section. (A quick explanation of those numbers: the first is the number of note onsets; the second is the total number of pulses in the pattern. So E(4,11) has 4 notes in an 11-pulse pattern, and the list of ones and zeroes shows which pulses carry the note onsets.) The topic of Euclidean Rhythms is really fascinating. The first half of the paper The Distance Geometry of Music presents the topic without too much math and relates it to many examples found in world music. I used the triad pairs technique to select pitches for each note in the melody. I took some liberty, deleting some notes almost at random, to thin out the sequences heard in the B section.
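If you want to play with these patterns yourself, here is a minimal sketch in Python. It uses the simple “(i × k) mod n” shortcut rather than Bjorklund’s original algorithm, so the result may be a rotation of the pattern shown in the paper; the function name `euclid` is just my choice for this example.

```python
def euclid(k, n):
    """Return an n-pulse pattern with k note onsets spread as evenly as possible.

    A 1 marks a pulse with a note onset, a 0 marks a rest.  The
    (i * k) % n < k test produces a maximally even distribution
    (possibly a rotation of the textbook Bjorklund pattern).
    """
    return [1 if (i * k) % n < k else 0 for i in range(n)]

# The three A-section rhythms from this track:
for k in (4, 5, 6):
    print(f"E({k},11):", euclid(k, 11))
```

For example, `euclid(4, 11)` comes out as `[1, 0, 0, 1, 0, 0, 1, 0, 0, 1, 0]`: four onsets with gaps of 3, 3, 3, and 2 pulses.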
If you want a decoder ring for the part of music theory that’s all about seven note scales and their different modes, the Heptatonic Modes Matrix might be just what you need!
You can download a draft copy.
It is a work in progress. Send feedback to email@example.com
I’m going to disclose the command line that I use to generate a simple music video from an audio file and an image. The video will contain the original audio and an XY scope visualization of the audio superimposed over a custom background. The process depends on a few things:
- ffmpeg – search the internet for download and installation instructions suitable for your platform
- an image file to serve as a background. To use the command below, you’ll want to make a 1920×1080 image
- an audio file (of course!)
The following are four separate commands for the Windows command line; other platforms will be slightly different. Change the first three to refer to your own filenames (you can guess the role of each file from its name). The last command does the heavy lifting and is a single, really long, command. If you’re not familiar with using a command line interface (typing commands into a big blank window), you’ll want to know how to copy and paste into a command window, and note that every character is essential.
SET IMAGEFILE="test.png"
SET AUDIOFILE="test1.wav"
SET OUTFILE="testout.mp4"
ffmpeg -loop 1 -i %IMAGEFILE% -i %AUDIOFILE% -filter_complex "[1:a]avectorscope=s=1920x1080:draw=line:mode=lissajous_xy:rc=100:gc=120:bc=255:rf=9:gf=9:bf=9[scope];[scope]split[m][a];[m][a]alphamerge[keyed];[0:v][keyed]overlay=eof_action=endall,format=yuv420p[out]" -map "[out]" -map 1:a %OUTFILE%
That’s it! I hope it works for you.
If you like math, are interested in synthesis, and need to write a paper, maybe this article will be of interest to you. I’m sharing a paper from a long time ago that documents my personal research into the topic of FM synthesis, with CSound’s foscil opcode and a little math.
The references at the end of the paper might be reason enough to download.
I’m very excited to announce the Big Winky Media Marimba SFZ Instrument is available for purchase.
The instrument is the result of hours of reviewing, packaging, and testing dozens of high-quality samples recorded in an anechoic chamber at the University of Iowa. The cord-wound and yarn-wound mallet instruments feature three velocity layers each. A rubber-mallet instrument is also included.
Some demonstrations are available below. The demos themselves are provided under the Creative Commons Attribution-ShareAlike license.
The first demo, More Linear by Dan Liszewski, uses the yarn-wound mallet samples.
The next demo, Shifting Paradigm, also by Dan Liszewski, features the yarn mallets as well.
If you don’t want to pay five bucks, check out the Free Marimba (rubber mallet only) SoundFont.
I have packaged some of the marimba samples from The University of Iowa Musical Instrument Samples collection into a free SoundFont (.sf2) for your unrestricted use. Go to the product page to download.
This release only features the rubber mallet samples. Let me know if you’d be interested in a package with the yarn mallet samples as well.
I think it was 1982 when I wrote a paper for a course on technology and society. One of my sources was Alvin Toffler’s The Third Wave. The book makes a number of predictions for a post-industrial world and was the primary source for my conclusion. (This scan shows a draft printed on continuous-feed fan-fold “computer paper” by a line printer capable of only upper-case letters! You can even see the holes on the left edge where the printer feed mechanism engaged the paper.) Reading this 37 years later, it seems to describe the Internet, Spotify, and SoundCloud!
The media will become “de-massified” according to Alvin Toffler in post industrial society. No longer will single radio stations broadcast to thousands of anonymous listeners hungry for entertainment. Developments like cable communications will personalize listening even more than recordings have. Instead of collecting vinyl disks, listeners will be able to dial up select recordings through their cable system from a central library according to their own tastes. It is even possible that contributions to the central libraries may be made by members of the community.
Daniel Liszewski, 1982
Originally published: 2007-05-28 04:57:55
Original post URL: http://bigwinky.com/blog/?p=17
I am getting tired of deriving formulas for converting various MIDI data to real time. At the moment I want to make a standard MIDI file (SMF) to CSound .sco conversion utility. (Yes, I know one already exists.) The most significant part of the task is to come up with some code that takes the time signature, tempo, and timing data and converts it to clock time (in seconds).
So here we go, with an attempt to memorialize this once and for all!
MIDI Division is the number of delta-times per quarter note. I’m calling the units of division ticks.
MIDI Tempo is specified in microseconds per quarter note.
A MIDI quarter note is 24 MIDI-clocks. (Clocks are not ticks, and clocks are not delta-times.)
A MIDI time signature comprises these quantities (we don’t need them for the problem at hand but, while I’m documenting this stuff, I’ll include them): the numerator (beats per bar); the denominator, expressed as a negative power of two; the number of MIDI-clocks per metronome click [MIDI-clocks/beat]; and the number of 32nd notes per MIDI quarter note (this last number I’m assuming is always constant, equal to eight).
So, to convert a quantity of MIDI delta-times to seconds:
(numDeltaTimes [ticks] / division [ticks/quarter-note]) * tempo [microseconds/quarter-note] * (1 [second] / 10^6 [microseconds])
or without all my ‘unit’ notation:
numTicks * tempo / (division * 10^6)
numTicks, tempo, and division are the raw quantities from the events and header of the MIDI file.
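The formula above translates directly into code. Here is a minimal sketch in Python (the function and variable names are just my choices for this example):

```python
def deltas_to_seconds(num_delta_times, division, tempo):
    """Convert a span of MIDI delta-times (ticks) to clock time in seconds.

    division: ticks per quarter note, from the SMF header
    tempo:    microseconds per quarter note, from the Set Tempo meta event
    """
    return num_delta_times * tempo / (division * 10**6)

# Example: at 120 BPM the Set Tempo value is 500000 microseconds per
# quarter note, so with division = 480 ticks per quarter note,
# 480 ticks should come out to half a second.
print(deltas_to_seconds(480, 480, 500000))  # prints 0.5
```

Note this only holds between tempo changes; a real SMF-to-.sco utility would have to re-apply the formula segment by segment, each time a Set Tempo event occurs.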
2019 is off to a good start with the return of bigwinky.com to cyberspace! During the downtime, I’ve been preparing a number of tracks for release. Watch this space! I will post details when they become available.
This site is very basic, but I plan on adding more content, including some of the stuff that was posted over the last 12 years or so, before bigwinky.com went down earlier this year.