Hacker Public Radio

    HPR4618: Simple Podcasting - Episode 2 - Basic Filtering

    15/04/2026
    This show has been flagged as Clean by the host.

    Basic-Filtering







    01 Introduction



    This is the second episode in a four part series on a simple way to create your own HPR podcast episode.







    02



    This episode will cover the following topics:



    Basic filtering.



    De-essing to improve voice quality.



    And normalizing to adjust audio levels for easier reviewing.







    03



    Filtering is removing unwanted noise from an audio signal.



    There are several ways of doing this.



    It is possible to do this with Audacity, but I don't know how so I won't try to describe that method.







    It is possible however to filter using command line tools such as FFMPEG and Sox.



    When assembled into shell scripts, these tools can become part of an automated process that you can use over and over again for each HPR episode that you record.







    04



    In a later episode I will discuss how to analyze audio signals to find the sources of noise that can be reduced or eliminated with filters.



    In this episode however I will discuss basic filtering that you can apply routinely without doing any analysis beforehand.







    05 Sources of Noise



    A question that you may have is "why is there noise in the recording?"



    There are many sources of undesirable noise.







    06



    A very common one that you may not be aware of is electrical noise that works its way into the electronic recording circuits and is imperceptible to you until you play back the recorded audio.



    The most common noise signal is what is commonly called "line noise": a low frequency hum at 50 or 60 Hz that reflects the AC frequency of the power lines feeding your recording hardware.







    07



    You may be familiar with this low frequency hum from when it emanates from large electrical hardware such as transformers as it makes the laminations vibrate. However, it can also work its way indirectly into electronic equipment as well.



    Good quality audio hardware may filter all or most of this out, but it is present in a lot of consumer grade hardware.







    08



    Other sources of electrical noise may reflect specific problems in your recording hardware. I will discuss one such problem with my microphone that I had to address.



    Still other sources of noise may reflect actual physical audio noise around you, such as fans. Placing the microphone close to your face will help in dealing with a lot of these problems, but you may find filtering to be of some help here as well.











    09 Audio Frequency Range



    Let's start with some basics.



    A good quality stereo of the type you may have at home is typically rated to perform between 20 Hz and 20 kHz.



    This is the widest possible range that we need to consider.



    In reality, this is a far wider range than is needed for a voice oriented podcast.



    It is also well beyond the range of the hardware that many of your listeners will be using to listen to the podcast.







    10



    For example, the speakers that I have connected to my PC and a number of headphones and earphones that I have tested drop off drastically below 80 Hz or above 8 kHz, or even above 6 kHz in many cases.



    This is not audiophile quality hardware, but it is representative of the sort of hardware that a lot of your listeners will be using when listening to podcasts.



    And to be honest here, a lot of people will have difficulty hearing anything above 8 kHz even with the best quality audio hardware due to hearing loss from environmental noise exposure or age.







    11



    You can get a good idea of what different frequencies sound like by generating sine waves using either FFMPEG or Sox.







    Here's an example of generating a 1 kHz sine wave using FFMPEG. A copy of this will be in the show notes.



    ffmpeg -f lavfi -i "sine=frequency=1000:sample_rate=44100:duration=3" 01000hz.flac







    This creates a sine wave at 1 kHz and at a sample rate of 44.1 kHz for a duration of 3 seconds and saves it to a flac file named 01000hz.flac







    12



    Here's the same using Sox.



    sox -n -r 44100 -b 16 01000hz.flac synth 3 sine 1000



    The -b 16 specifies using 16 bit audio to encode it, and the "sine 1000" element specifies the frequency in hertz.







    13



    You can test this out at different frequencies to get a feel for how your hardware responds.
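    As a sketch of how you might do that testing, the following shell loop prints one tone-generation command per frequency, based on the FFMPEG command shown earlier. The frequency list is just an illustrative sample; pipe the output to sh to actually create the files (that step requires ffmpeg).

    ```shell
    # Print one ffmpeg tone-generation command per test frequency.
    # The frequency list is an illustrative sample; adjust to taste.
    for freq in 60 250 1000 4000 8000 12000; do
        printf 'ffmpeg -f lavfi -i "sine=frequency=%d:sample_rate=44100:duration=3" %05dhz.flac\n' "$freq" "$freq"
    done
    ```

    Playing each resulting file in turn gives a quick sense of where your speakers or headphones start to drop off.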







    What these effective limits on typical hardware mean is that we can quite safely filter out a large part of what is considered to be the "audio range" without any noticeable loss of quality.



    For the purposes of our discussion here then I will limit the frequency range to between 80 Hz and 12 kHz, and that is being generous. You can probably narrow that, particularly at the top end, without any problems.







    14



    At the low end, the typical rule of thumb recommended by most people seems to be that for the average male voice you can set the lower threshold at 80 Hz, and for the average female you can set it at 160 Hz.



    Note that you don't *have* to set the threshold higher for a female. Rather, it is just that you typically *can* set it higher if you wish. Note also that these are averages, and may not reflect an actual individual.











    15 Simple Filters



    We will now create some simple filters using the same command line software mentioned in a previous episode in this series.



    These are FFMPEG and Sox.







    16



    First let's define some terminology.



    A high pass filter passes through frequencies which fall above a certain threshold and blocks frequencies below that threshold.



    A low pass filter passes through frequencies which fall below a certain threshold and blocks frequencies above that threshold.







    17



    In reality there isn't an abrupt cut-off in the filters. Instead there is a gradual roll off or sloping off of amplitude below or above the specified filter frequency.



    This is for two reasons. One is that an abrupt cut-off would risk introducing audible distortion in the signal for frequencies near the margin.







    18



    The other reason is that this is how hardware filters traditionally inherently worked when they were made out of electronic components such as resistors, capacitors, and inductors.



    The sharpness of this cut off can be adjusted, but we won't be fiddling with it in that sort of detail.



    You will sometimes see filters specified in terms of "poles". This has to do with describing how filters were constructed using electronic components. Don't worry about it, it doesn't really matter.







    19



    Here is a typical high pass filter using ffmpeg which filters out frequencies below 80 hertz.



    # High pass filter.



    ffmpeg -i inputfile.flac -af "highpass=f=80" outputfile.flac







    Here is a typical low pass filter using ffmpeg which filters out frequencies above 12 kHz.



    # Low pass filter.



    ffmpeg -i inputfile.flac -af "lowpass=f=12000" outputfile.flac







    20



    Here is a filter which combines the two.



    # Combined filters.



    ffmpeg -i inputfile.flac -af "highpass=f=80, lowpass=f=12000" outputfile.flac







    And here is the same thing using Sox.



    sox inputfile.flac outputfile.flac highpass 80 lowpass 12000











    21 Filtering Out Specific Frequencies



    Recall that I mentioned that a common source of noise is the 50 or 60 Hz AC power line frequency working its way through the electronics of your recording device.



    Because filters operate gradually and the 80 Hz lower filter threshold is close to 60 Hz, the high pass filter may not deal with this adequately.







    22



    Now it happens that your listeners may not be able to hear this 50 or 60 Hz noise anyway because their audio hardware won't reproduce it.



    That by the way includes you not being able to hear it either when you review your recording before uploading it.



    However, there may be some HPR listeners who are sitting back sipping a glass of wine and listening to your episode on their stereo and who can hear it.



    That suggests that we ought to do something about it just in case.







    23



    I will get into how to analyze audio signals in a later episode, but for now just accept that I looked at the frequency spectrum of a sample recording using my hardware and found a large 60 Hz noise spike which I wanted to address.







    24



    Experimenting with additional high pass frequencies up to 120 Hz did not improve things much with respect to the 60 Hz problem. There are other parameters which could be tweaked, but at this point it would seem most promising to attack the 60 Hz spike problem directly using a different filter method.







    To deal with this 60 Hz spike we can use a "band reject" filter, which removes a specific band of frequencies. We will use this in combination with the filtering that we have already done above.







    25



    After a small amount of experimenting I came up with the following. I also added in a 50 Hz filter while I was at it, for the benefit of those living in areas with 50 Hz electrical supply.







    These filters will be included in the show notes, so don't worry if you can't quite understand all the details from a verbal description.







    26



    Here's the FFMPEG version.







    # Using ffmpeg



    ffmpeg -i input.flac -af "highpass=f=80, lowpass=f=12000, bandreject=f=60:width_type=h:w=20, bandreject=f=50:width_type=h:w=20" output.flac







    27



    This has the following elements:



    A high pass filter at 80 Hz,



    A low pass filter at 12 kHz,



    A band reject filter centred at 60 Hz and with a width of 20 Hz,



    A similar band reject filter centred at 50 Hz.







    28



    Here's the Sox version.







    # Sox version.



    sox input.flac output.flac highpass 80 lowpass 12000 bandreject 60 20 bandreject 50 20







    Note that with Sox you must not quote the filter definition strings, or it will fail with an error because Sox doesn't see enough parameters. This is not a problem with ffmpeg.







    29



    The band reject filter knocks the stuffing out of the 60 Hz line noise, and the 50 Hz parameter should do the same for that frequency as well.







    This basic filter should be able to be applied to any podcast audio recording without causing any problems. You can probably reduce the low pass frequency from 12 kHz down to 8 kHz without any problems, but I would suggest that you test it with your voice before making that decision.







    30



    I will come back to filtering out specific frequencies again later when I discuss how I solved a specific problem with the hardware that I am using. However, we have to discuss how to analyze audio signals before we can do that sort of technical troubleshooting, and I will cover that in a later episode.











    --------------------







    31 De-Essing



    An additional type of filtering is "de-essing".



    When recording audio, the microphone or environment may cause "s", "sh", "ch" and possibly other sounds to be exaggerated.



    These are all higher frequency elements of voice recordings.



    "De-essing" attempts to soften these sounds by selectively reducing the volume on the frequency band which contains these sounds.











    32 Software Filters



    De-essing is accomplished via software filters.



    FFMPEG and Sox both have de-essing filters.



    For FFMPEG, the de-essing filter is built in.



    For Sox however, we must install an optional plug-in. I will cover this in more detail when I discuss using Sox for de-essing.







    33 Do You Need De-Essing?



    The first thing to make clear however, is that you may not need to worry about this.



    If you think the audio sounds just fine the way it is, you don't need to do any de-essing to it.



    De-essing is a very subtle change, and you would probably need to do some careful before and after comparisons of audio samples to tell the difference.



    I didn't know that a thing called de-essing even existed before I started doing the research to make this podcast episode.



    However, at this point we are doing things more for fun than out of necessity, so I'll describe it anyway.







    34 De-Essing with FFMPEG



    De-essing with FFMPEG is relatively simple.



    The filter is built in, and there are just three values to adjust.



    On the other hand, it is not really obvious what these values mean in practical terms.







    35



    I will however warn you to not rely on the AI search results from Google to understand this feature.



    The AI, in my experience, just makes stuff up about it and tells you to use options that don't exist and values that are not valid.



    I found that the only useful information came from FFMPEG's own web site, and from examples written by actual humans.







    36



    I then experimented with different values to see what effects they had.



    Since the results are rather subtle, fine tuning isn't really that necessary and I found that I could arrive at some reasonable values fairly quickly.



    I will provide the parameters that I found useful for me, and I suspect they would probably work for you as well.







    37



    Here is a typical de-essing command.



    ffmpeg -i inputfile.flac -filter_complex "deesser=i=0.5:m=0.5:f=0.5:s=o" -b:a 336k -sample_fmt s16 outputfile.flac







    38



    The important arguments are i, m, and f.







    i is intensity for triggering de-essing.



    The allowed range is 0 to 1. The default is 0.



    By experimentation I found that "0" means no de-essing, and "1" is maximum de-essing.



    I found that setting it to "0.5" gave satisfactory results.







    39



    m is the amount of "ducking on the treble part of sound".



    The allowed range is 0 to 1. The default is 0.5.



    By experimentation I found that "1" means no de-essing, and "0" is maximum de-essing.



    I found that setting it to "0.5" gave satisfactory results.







    40



    f is how much of the original frequency content to keep when de-essing.



    The allowed range is 0 to 1. The default is 0.5.



    By experimentation I found that "1" means no de-essing, and "0" is maximum de-essing.



    I found that setting it to "0.5" gave satisfactory results.







    41



    Setting "m" or "f" too high can result in a distorted output as too much of the original sound is cut out.



    The defaults of 0.5 in both cases gave audible improvements without noticeable distortion.







    42



    There is an additional parameter called "s". This selects which signal the de-essing filter outputs.



    Setting it to "o" is the normal and default mode.



    Setting it to "e" causes it to output just the components that it would normally have filtered out. This is useful for testing purposes so you can see what and how much is being filtered. You only use this when experimenting with different values.



    Setting it to "i" causes the input to be passed through without de-essing. This would be useful in scripts where you want to use a variable to control whether or not to use the de-esser while still creating the expected output file.
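    As a sketch of that scripted use, a shell variable can select the mode and build the filter string. The variable name DEESS is my own invention, not part of FFMPEG:

    ```shell
    # Choose the deesser output mode from a hypothetical DEESS variable:
    # "yes" -> s=o (normal de-essed output), anything else -> s=i (pass-through).
    DEESS="yes"
    if [ "$DEESS" = "yes" ]; then mode="o"; else mode="i"; fi
    echo "deesser=i=0.5:m=0.5:f=0.5:s=$mode"
    ```

    The printed string is what you would hand to ffmpeg's -filter_complex option, so the same script always produces the expected output file either way.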







    43



    There are two other elements of the command which were included but are not, strictly speaking, part of the de-essing filter itself.



    These are "-b:a 336k" and "-sample_fmt s16".



    "-b:a 336k" sets the audio bit rate to 336k.



    "-sample_fmt s16" sets the audio sample format to 16 bit.







    I found it necessary to specify these in order to prevent the de-essing filter from changing formats.



    They are not part of de-essing however.











    44 De-Essing with Sox



    You can also de-ess with Sox.



    However, this is more complex for several reasons.



    One reason is that Sox does not have its own de-essing filters. Instead it uses optional plug-ins, and you must find and install these.



    The actual plug in may vary depending on what operating system you are using.



    The other reason is that it deals with the issue via fairly low-level parameters, and so is a bit more complex to describe.



    Because of this I will skip over describing this in detail and just give a very brief overview. If anyone would like me to describe in more detail how to de-ess with Sox, then send in a comment and I will do a short episode on it later.







    45 Sox De-Essing Overview



    To de-ess with Sox, you first need to install the plug-ins.



    On Linux, these will be the TAP ladspa plug-ins.



    TAP stands for "Tom's Audio Processing" plugins.



    ladspa stands for "Linux Audio Developer's Simple Plugin API".







    To install the TAP plugins on Ubuntu, use the following command.



    sudo apt install tap-plugins







    The plug-in we need is called "tap_deesser.so".







    46



    In order to use the plug-ins, you need to set the path as a variable.



    On Ubuntu this is:



    export LADSPA_PATH="/usr/lib/ladspa:"



    I put the above in the shell script which calls the Sox de-esser.







    47



    To use the Sox de-esser, you do the following:



    sox inputfile.flac outputfile.flac ladspa tap_deesser tap_deesser -30 4500







    48



    tap_deesser tap_deesser tells it which plugin to use. We need to state tap_deesser twice because the first is the name of the ".so" file and the second is the name of the plugin. A single "so" file can contain multiple filters, although in this case there is only one.



    -30 is the threshold in dB at which to start to apply the filter.



    4500 is the frequency in Hz that the filter centres around.







    49



    The TAP web page has a table of recommended frequencies. These are:







    Male 'ess' 4500 Hz



    Male 'ssh' 3400 Hz



    Female 'ess' 6800 Hz



    Female 'ssh' 5100 Hz







    You will need to do some trial and error to find what works best for you.











    50 De-Essing Summary



    De-essing can be used to make minor improvements to voice quality by reducing certain harsh sounds which may be exaggerated by a microphone.



    If it sounds like a lot of work you can probably simply not bother with it and not really miss it.











    --------------------







    51 Normalizing







    Normalizing a signal means adjusting it to meet a specified level.



    For audio it means adjusting the volume or sound level.



    You may wish to normalize the audio of your recording to make it easier to listen to when reviewing it.



    The copy that you send to HPR however should be the original un-normalized version.







    52



    Sound level is measured in two ways: dB and LUFS. The latter is a more sophisticated measure which takes into account how the human ear perceives loudness. I won't go into a lot of detail in that regard; just accept LUFS as the international standard unit of perceived loudness. LUFS stands for "Loudness Units referenced to Full Scale", and is part of the EBU R128 standard, where EBU stands for European Broadcasting Union.







    In both cases the measured value is a negative number, with numbers smaller in magnitude being louder. Smaller in magnitude means closer to zero.







    53



    HPR will adjust the sound level for publication, but if you wish to check the audio before uploading, it can help to adjust it to something close to what HPR will do, so that you can listen to it at a volume close to what most listeners will hear.



    In my case full volume on the audio system input produced a sound level which was much lower than a typical HPR episode.



    However, the volume level in the flac file itself can be adjusted using ffmpeg.







    54 Measuring Volume Level



    First we need to see what the volume level is for a typical HPR podcast.







    To do this we use ffmpeg.



    In this example we are using an episode named "hprpodcast.mp3".



    Pick an episode which you think is suitable and copy the file to the working directory.







    55



    In the following script we use a volumedetect filter.



    The text we want normally outputs to standard error, so we have to do a bit of bashery to redirect this to standard output so it will go through a pipe.



    We then grep for the string "I:".



    This will have the average volume level in "loudness units" (LUFS).



    Then we extract the number, giving us a target LUFS level.







    56



    ffmpeg -i hprpodcast.mp3 -filter:a ebur128=framelog=quiet -f null /dev/null 2>&1 | grep "I:" | cut -d: -f2
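    To see what the text processing at the end of that pipe is doing, here is the same extraction run on a made-up sample of the "I:" line from ffmpeg's ebur128 summary (the number is hypothetical):

    ```shell
    # Hypothetical sample of the "I:" line that ffmpeg's ebur128 filter
    # prints to standard error.
    sample='    I:         -17.6 LUFS'
    # Same extraction as above: keep everything after the colon...
    echo "$sample" | grep "I:" | cut -d: -f2
    # ...and optionally strip the spaces and units to get a bare number.
    echo "$sample" | grep "I:" | cut -d: -f2 | tr -d ' ' | sed 's/LUFS//'
    ```

    The second command is useful if you want the value in a shell variable for later arithmetic.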







    57



    Unfortunately I can't find a Sox feature which handles EBU loudness, so we need to work in dB instead.







    Here is the sox version.



    However, note that this may not work on mp3s if sox mp3 handling is not installed.







    58



    sox hprpodcast.mp3 -n stats 2>&1 | grep "RMS lev dB" | rev | cut -d" " -f1 | rev
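    The rev | cut | rev trick in that command takes the last space-separated field on the line. Here it is applied to a made-up sample of the sox stats line (the number is hypothetical):

    ```shell
    # Hypothetical sample of the "RMS lev dB" line from sox's stats output.
    sample='RMS lev dB     -22.31'
    # Reverse the line, take the first field, reverse back: the last field.
    echo "$sample" | rev | cut -d" " -f1 | rev
    ```

    This works regardless of how many spaces sox uses to pad the columns.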







    59



    You can use either of these for measuring the volume or sound level of an audio file.



    However, note that individual episodes from HPR may vary a bit in terms of loudness. In the samples that I looked at, the variation was less than 1 LUFS or dB, while my own recording was roughly 5 LUFS lower in volume than a typical HPR episode.







    --------------------







    60



    If you Google for the EBU R128 standard, the AI result will confidently tell you to use a target of -23 LUFS. However, this is wrong, which shouldn't be any surprise if you are familiar with using AI.







    61



    The -23 LUFS figure is for broadcast television. There is in fact no standard level for podcasts. However, there is apparently a general industry convention of using somewhere around -17 LUFS. If I look at the first two HPR episodes that I did, HPR normalized them to -16.8 LUFS and -17.8 LUFS, while the original FLAC files that I submitted were -21.6 LUFS and -22.3 LUFS respectively.







    62



    So HPR appears to be targeting somewhere around -17 LUFS as well. We will therefore use -17 LUFS as the target for our own review copy.







    --------------------







    63



    The nice thing about using the EBU loudnorm filter in FFMPEG is that it is very simple.



    Here is the FFMPEG version.







    64



    ffmpeg -i inputfile.flac -af loudnorm=I=-17:TP=-2.0:LRA=7.0 -ar 44.1k outputfile.flac







    65



    "I" is the LUFS target.



    LRA is the loudness range target. The default value is 7.0 so I used that.



    TP sets the maximum true peak. The default value is -2.0, so I used that.







    --------------------







    66



    With Sox things are a bit more difficult. There is no direct method of setting the loudness that I am aware of, so we need to measure the current sound level in dB, do some calculations, and then apply that as a gain factor to the output.







    67



    First we need to subtract the measured dB level of our flac file from the target dB level taken from the HPR episode we chose as a sample.



    Bash by itself normally just does integer math.



    However, we would like to have at least one decimal point of resolution to work with.



    The simple solution is to do this calculation using bc, the shell arbitrary precision calculator.
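    As a sketch with made-up numbers, suppose the recording measures -22.3 dB and the target is -18.9 dB:

    ```shell
    # Hypothetical measured level and target level, in dB.
    leveldb="-22.3"
    targetdb="-18.9"
    # bc gives us decimal arithmetic; scale sets the number of decimal places.
    volumechange=$(echo "scale=2 ; $targetdb - $leveldb" | bc)
    echo "$volumechange"   # the gain in dB to pass to sox
    ```

    Here the result is a positive 3.4 dB, meaning the volume needs to be increased; a recording louder than the target would give a negative gain instead.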







    68



    Then take this new value and use it in a "volume" filter.



    The number which we give sox is the amount to increase or decrease the volume by.



    Sox will then output a new file with the new volume level.



    You can now listen to this file under conditions more closely approximating what it will be like after HPR have done their own audio adjustments and normalization on it.



    This helps when listening to the file for any problems before you upload it.







    69



    Rather than reading 5 lines of complex shell script to you, I will put a copy of it in the show notes.







    level=$( sox "$inputfile" -n stats 2>&1 | grep "RMS lev dB" )



    leveldb=$( echo "$level" | rev | cut -d" " -f1 | rev )



    targetdb="-18.9"



    volumechange=$(echo "scale=2 ; $targetdb - $leveldb" | bc )







    sox "$inputfile" "$outputname" gain "$volumechange"











    --------------------







    70



    Normalization should be the last thing you do to the file.



    It should be done after any noise filtering, such as low pass, high pass, bandreject, etc.



    If you normalize first, you will be amplifying the noise as well as the desired signal.







    71



    The exact normalization level used for review purposes doesn't matter, as HPR will apply their own later.



    All we are doing at this point is adjusting the volume to something which approximates a normal episode so you can listen to it for final review.







    72



    When you send your file to HPR, send the original *unnormalized* version, not the normalized version.



    When you normalize an audio signal, if you are not careful you may introduce things which cause problems with later additional processing.



    HPR probably do more things to the audio than just normalizing and so they need the unnormalized file so that they can do their own normalizing last.







    --------------------







    73



    If at this point you are happy with the recording as is, you are ready to send the *unnormalized* version to HPR.







    The scripts to implement the features discussed in this episode will be in the show notes.











    74 Conclusion



    In this episode we covered basic filtering using ffmpeg and sox.



    We discussed what noise is and some of its origins.



    We talked about the audio frequency range and the limitations of common hardware used to record and listen to podcasts.



    We covered basic high and low pass filters used to limit the audio frequency range in order to remove possible low and high frequency noise.







    75



    We discussed specific filters to eliminate 50 and 60 Hz electrical power noise.



    We talked about de-essing, what it was, why you may wish to use it, and some basic de-essing filter implementation details.



    We discussed normalizing, what it is, why you may wish to use it, and how it relates to podcasting conventions.







    76



    In the next episode we will discuss analyzing audio signals to help find the sources of noise problems.



    We will also discuss creating filters to eliminate any problems that we found.



    In my case I had a problem with the microphone that I use, and I will describe how I used filters to deal with that problem.







    77



    This has been the second episode in a four part series on simple podcasting.







    --------------------







    EBU R128 Loudness Measurement using FFMPEG







    #!/bin/bash







    echo "EBU r128 loudness measurement using FFMPEG"



    for inputfile in *.flac *.mp3 ; do



    level=$( ffmpeg -i $inputfile -filter:a ebur128=framelog=quiet -f null /dev/null 2>&1 | grep "I:" | cut -d: -f2 )



    echo $inputfile $level



    done







    --------------------







    DB Sound Level Measurement using Sox







    #!/bin/bash



    # Sox version. May not work for mp3 if mp3 format handling is not installed.



    echo "dB sound level measurement using Sox."



    for inputfile in *.flac *.mp3 ; do







    level=$( sox $inputfile -n stats 2>&1 | grep "RMS lev dB" )



    leveldb=$( echo "$level" | rev | cut -d" " -f1 | rev )



    echo $inputfile $leveldb



    done







    --------------------







    EBU R128 Loudness Normalization using FFMPEG







    #!/bin/bash







    # Adjust the volume to a desired level.



    for inputfile in *.flac ; do



    j=$( basename $inputfile ".flac" )



    outputname="$j""-normff.flac"







    ffmpeg -i $inputfile -af loudnorm=I=-17:TP=-2.0:LRA=4.0 -ar 44.1k $outputname



    echo $outputname



    done







    --------------------







    DB Sound Level Normalization using Sox







    #!/bin/bash







    # Adjust the volume to a desired level.



    for inputfile in *.flac ; do



    j=$( basename $inputfile ".flac" )



    outputname="$j""-normsox.flac"



    # Measure the volume level and extract the mean volume.



    level=$( sox $inputfile -n stats 2>&1 | grep "RMS lev dB" )



    leveldb=$( echo "$level" | rev | cut -d" " -f1 | rev )



    # Calculate the difference in dB desired. Scale specifies the number of decimal places.



    # Target db is the volume measured on hpr4506 (UCSD-P-System).



    targetdb="-18.9"



    volumechange=$(echo "scale=2 ; $targetdb - $leveldb" | bc )



    echo "Using sox: File: $inputfile Original level: $leveldb Change by: $volumechange"







    # Adjust the volume.



    sox $inputfile $outputname gain "$volumechange"







    done







    --------------------







    Full processing pipeline for making simple podcasts using FFMPEG







    #!/bin/bash














    # Full processing pipeline for making simple podcasts.







    # ======================================================================



    # Concatenate multiple flac files into a single flac file.



    # This is used to combine podcast recorded segments into a single



    # flac file for uploading to HPR.







    concataudio ()



    {



    outputname="$1"







    # First create the list file.



    printf "file '%s'\n" [0-9][0-9].flac > podseglist.txt







    # Now concatenate them



    ffmpeg -f concat -safe 0 -i podseglist.txt "$outputname"







    rm podseglist.txt



    }







    # ======================================================================







    # Basic filters.



    filter ()



    {



    inputfile=$1



    outputname=$2







    # Using ffmpeg.







    # The high and low pass filters.



    hlpfil="highpass=f=80, lowpass=f=12000"







    # Band reject filters: one for 60 Hz and another for 50 Hz.



    linefil="bandreject=f=60:width_type=h:w=20, bandreject=f=50:width_type=h:w=20"







    # Using ffmpeg



    ffmpeg -i $inputfile -af "$hlpfil, $linefil" $outputname



    }







    # ======================================================================







    # De-Essing.



    deessing ()



    {



    inputfile=$1



    outputname=$2



    option=$3







    # De-essing filter.



    ffmpeg -i $inputfile -filter_complex "deesser=i=0.5:m=0.5:f=0.5:s=$option" -b:a 336k -sample_fmt s16 $outputname







    }







    # ======================================================================



    # Normalizing the audio to EBU R128 standard for review using ffmpeg.



    normffmpeg ()



    {



    inputfile=$1



    outputname=$2







    # Normalize to EBU R128 standard.



    ffmpeg -i $inputfile -af loudnorm=I=-17:TP=-2.0:LRA=4.0 -ar 44.1k $outputname







    }







    # ======================================================================







    # Output an MP3 version to help with reviewing.



    mp3convert ()



    {



    inputfile=$1







    # Get the name of the file and then create the output file name.



    j=$( basename $inputfile ".flac" )



    outputname="$j"".mp3"







    # Convert to MP3.



    ffmpeg -i $inputfile $outputname



    }







    # ======================================================================







    # Concatenate the separate audio files.



    concataudio fullpod-unfiltered.flac







    # Basic filtering.



    filter fullpod-unfiltered.flac filtered.flac







    # De-essing. This is the version to send for publishing.



    # The third argument should be "o" for de-essing, or "i" for pass through without de-essing.



    deessing filtered.flac fullpod.flac o







    # Normalized for review.



    normffmpeg fullpod.flac fullpod-norm.flac







    # Output an MP3 copy for review.



    mp3convert fullpod-norm.flac















    --------------------



    --------------------





    Provide feedback on this episode.
  • Hacker Public Radio

    HPR4617: UNIX Curio #4 - Archiving Files

    14/04/2026
    This show has been flagged as Clean by the host.

    This series is dedicated to exploring little-known—and occasionally useful—trinkets lurking in the dusty corners of UNIX-like operating systems.


    When you think about creating and managing archives on a UNIX system, tar is probably the utility that comes to mind. But it was not the first archiving program; ar was in First Edition UNIX [1], and cpio also pre-dates it, sort of [2]. According to the NetBSD manual page, cpio was developed within AT&T before tar, but did not get widely released until System III UNIX, after tar was already well known from the earlier release of Seventh Edition UNIX (a.k.a. Version 7).



    You might think that ar and cpio are old and irrelevant these days, but these formats do live on. Each Debian package file [3] is an ar archive which in turn contains two tar files. On Red Hat, Fedora, SUSE, and some other distributions, each .rpm package file [4] contains a cpio payload. So these may very well still be in use on your modern Linux system.
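    You can see the ar container format in action without a .deb on hand. Here is a minimal sketch, assuming the binutils ar utility is installed; the file names (a.txt, b.txt, demo.a) are invented for illustration:

```shell
# Create two small files and bundle them into an 'ar' archive --
# the same container format dpkg uses for .deb packages.
printf 'hello\n' > a.txt
printf 'world\n' > b.txt
ar rc demo.a a.txt b.txt   # r: insert members, c: create the archive
ar t demo.a                # list the members, as 'ar t something.deb' would
```

    Running the same `ar t` command on a real .deb shows the debian-binary member alongside the two tar files mentioned above.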



    But let's get back to the subject of what you might want to use to create archives today. The tar utility has persisted in its popularity over the decades, and you most probably have a version installed on your UNIX-like systems. One of the problems with tar, however, is that it has not kept a consistent file format. Also, different implementations have used differing syntax at times.



    There are excellent reasons for the file format changing [5]. The names people give files have gotten longer over time, and the original Seventh Edition tar format could only handle a total pathname length of 100 bytes for each archive member. In addition, filenames were in ASCII format, and modern filesystems now accommodate richer encodings with characters that aren't in ASCII. The size of each archive member was limited to 8 gigabytes, unthinkably large back then, but not so big these days. User and group ownership could only be specified by numeric ID, which can vary from one system to another. Many other types of files and information simply couldn't be stored: block and character device nodes, FIFOs, sockets, extended attributes, access control lists, and SELinux contexts.



    As a result, the tar format had to evolve over the years. One important version was the ustar format, created for the 1988 POSIX standard. The POSIX committee wanted to try standardizing both the file format and syntax for the tar command. While the ustar format addressed some shortcomings, progress marched on. Filesystems started allowing filenames in different character sets and more types of information to be attached to files, so for the 2001 revision of POSIX they gave up on standardizing the tar utility and came up with a new format and utility, which is our actual UNIX Curio for this episode: pax [6].

    Since the pax program didn't have historical baggage, they could specify its options, behavior, and file format and be sure everyone's implementation would match. Developers of different tar implementations had been reluctant to change away from their historical option syntax to the standard. The pax utility was also an attempt to avoid taking sides between those who advocated for tar and fans of cpio. The pax file format was an extension of ustar with the ability to add arbitrary new attributes tied to each archive member as UTF-8 Unicode. Some of these attribute names were standardized, but implementers could also define their own, making the format more future-proof. Older versions of tar that could handle the ustar format should still be able to process pax archives, but might not know what to do with the extra attributes.
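    A quick way to try the pax format yourself is through GNU tar, which can write it directly. A minimal sketch, assuming GNU tar is installed; the directory and archive names are invented:

```shell
# Write a pax-format archive with GNU tar and list its contents.
mkdir -p demo
printf 'data\n' > demo/file.txt
tar --format=pax -cf demo-pax.tar demo   # --format=posix is a synonym
tar -tf demo-pax.tar                     # lists demo/ and demo/file.txt
```

    An older ustar-aware tar should still list and extract this archive, though the extended attribute records may show up as oddly named extra members.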



    GNU tar developed its current archive format [7] alongside the standardization of the ustar format. The GNU format was based on an early draft which later underwent incompatible changes, so the two unfortunately are not interchangeable. Unlike ustar, the GNU format has no limits on the size of files or the length of their names. In addition to its own format, GNU tar is able to detect and correctly process both ustar and pax archives. In situations where its native format can't store necessary information about a file (such as POSIX access control lists or extended attributes), GNU tar will automatically output the pax format instead (called "posix" in the documentation). However, it still uses the GNU format by default, though the documentation has been threatening to move to the POSIX format for at least 20 years [8].



    The good news is that the ustar, pax, GNU tar, and Seventh Edition tar formats are well documented, and utilities across many UNIX-like systems [2,7,9,10,11] are able to handle these, depending on which formats existed when the utility was developed. While your system may not have pax itself installed, there are other archiving utilities that can read the file format, including GNU tar. (Somewhat amusingly, Debian and some other Free Software operating systems package a pax utility developed by MirBSD [12] which largely follows the POSIX-specified interface, but doesn't support reading or writing archives in pax format!) Look at the manual pages for the tar, cpio, or pax utilities on your system to see if they can handle pax archives.



    Perhaps one aspect that has worked in favor of tar and other UNIX archive formats is that they only concern themselves with storing files and make no attempt at compression. Instead, it is common for a complete archive file to be compressed after creation; many utilities can be told to do this step for you, but it is not typically the default behavior. Therefore, if a better compression method comes along, the archive format doesn't need to change. If you do use compression, be careful to choose a method that is available on the destination system. Compressing files is a big enough subject to deserve its own episode, so we won't talk more about it here.
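    That separation shows up directly on the command line. A minimal sketch, assuming GNU tar and gzip are available; the directory and archive names are invented:

```shell
# Archive first, then compress as a separate step; swapping in a
# different compressor never changes the tar format itself.
mkdir -p demo && printf 'data\n' > demo/file.txt
tar -cf backup.tar demo
gzip -f backup.tar             # produces backup.tar.gz
# Many tar implementations will run the compressor for you:
tar -czf backup2.tar.gz demo
```

    Either way the result is an ordinary tar stream wrapped in gzip; substituting bzip2, xz, or zstd in the first form needs no change to tar at all.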



    So which format should you use when creating an archive? Unfortunately, there is no single answer that applies in all circumstances. The pax format is supported among modern UNIX-like systems and can represent all types of files and metadata. While other systems, their filesystems, and archive utilities might not be able to properly make use of all the metadata, they should at least be able to extract the data contained in files and, if Unicode is supported, give them appropriate filenames. If you intend to unpack the archive on an older system, more research might be needed to figure out what formats it is able to handle. The Seventh Edition tar format (often called "v7") is widely supported, including by older systems, but has limitations in what it can contain as described earlier.
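    If a legacy system is the destination, GNU tar can still emit the old v7 format. A minimal sketch, assuming GNU tar; names are invented, and remember the 100-byte pathname limit:

```shell
# Write a Seventh Edition (v7) format archive for maximum
# backwards compatibility; long paths and special files will fail.
mkdir -p demo && printf 'data\n' > demo/file.txt
tar --format=v7 -cf demo-v7.tar demo
tar -tf demo-v7.tar
```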



    Moving beyond the UNIX world, things get even more complicated. Apple's macOS, with its FreeBSD underpinnings, easily handles tar files. However, when it comes to MS-DOS and Windows, it's a bit different. There, a multitude of archiving programs and formats arose, usually combining archiving with compression. PKZIP was probably the most popular of these, and its .zip format became common in many places, helped by the fact that PKWARE openly published the specification. While there is only a single .zip format, it has many options, some proprietary, and different implementations have diverged in the way some aspects are handled (or not handled). An ISO/IEC standard for .zip [13] was published in 2015 giving a baseline profile, and sticking to it produces files that can be widely extracted successfully. Other file formats like OpenDocument use the .zip format and typically hew to the standardized profile.



    Windows' File Explorer, starting with Windows XP, can natively extract .zip files [14]. The Info-ZIP program [15] is a Free Software implementation for a wide variety of systems (even rather obscure ones); while it might not be installed on yours, if you're copying the archive file over, you can probably copy over its unzip utility at the same time to unpack it. So .zip probably has the broadest support, although it might not already be present on every system. However, as Klaatu points out in Hacker Public Radio episode 4557 [16], .zip files and the applications handling them aren't always great at maintaining metadata about files. The .zip format doesn't seem to have any way to represent UNIX file permissions, and user/group ownership can only be included as numeric IDs. Other types of metadata on UNIX-like systems are not saved at all. This is probably not a problem in some cases, such as with a collection of photos, but for others it might be a concern.



    While pax as a utility does not seem to have gained much popularity or support, except on commercial UNIX systems where including it was required to conform to the POSIX standard, its file format has persisted. Free Software systems have generally avoided the pax interface, preferring to stick with the tar utility on the command line, but usually have good support for archive files in the pax format. Outside of UNIX-like systems, .zip seems to have become the most common file format, and support for it is also good in the UNIX world, though it might not be built in.



    References:

    1. Archive (library) file format
       https://man.cat-v.org/unix-1st/5/archive
    2. NetBSD 10.0 cpio manual page
       https://man.netbsd.org/NetBSD-10.0/cpio.1
    3. Debian binary package format
       https://manpages.debian.org/trixie/dpkg-dev/deb.5.en.html
    4. RPM V6 Package format
       https://rpm.org/docs/6.0.x/manual/format_v6.html
    5. NetBSD 10.0 libarchive-formats manual page
       https://man.netbsd.org/NetBSD-10.0/libarchive-formats.5
    6. Pax specification
       https://pubs.opengroup.org/onlinepubs/009695399/utilities/pax.html
    7. GNU tar manual
       https://www.gnu.org/software/tar/manual/tar.html
    8. GNU tar manual for version 1.15.90
       https://web.cvs.savannah.gnu.org/viewvc/*checkout*/tar/tar/manual/tar.html?revision=1.3
    9. FreeBSD 15.0 libarchive-formats manual page
       https://man.freebsd.org/cgi/man.cgi?query=libarchive-formats&sektion=5&apropos=0&manpath=FreeBSD+15.0-RELEASE+and+Ports
    10. OpenBSD 7.8 tar manual page
        https://man.openbsd.org/OpenBSD-7.8/tar
    11. HP-UX Reference (11i v3 07/02) - 1 User Commands N-Z (vol 2)
        https://support.hpe.com/hpesc/public/docDisplay?docId=c01922474&docLocale=en_US
    12. MirBSD pax(1) manual page
        http://www.mirbsd.org/htman/i386/man1/pax.htm#Sh.STANDARDS
    13. ISO/IEC 21320-1:2015 Information technology - Document Container File Part 1: Core
        https://www.iso.org/standard/60101.html
    14. Mastering File Compression on Windows
        https://windowsforum.com/threads/mastering-file-compression-on-windows-how-to-zip-and-unzip-files-effortlessly.369235/
    15. About Info-ZIP
        https://infozip.sourceforge.net/
    16. HPR4557 :: Why I prefer tar to zip
        https://hackerpublicradio.org/eps/hpr4557/index.html







    Provide feedback on this episode.
  • Hacker Public Radio

    HPR4616: Thoughts about age control and further suggestions

    13/04/2026
    This show has been flagged as Explicit by the host.



    HPR EXCLUSIVE: THE INTERVIEW THAT WILL SAVE CIVILIZATION (OR AT
    LEAST YOUR KITCHEN DRAWER)







    Hopper sits down with the legendary Trollercoaster for a
    completely serious policy discussion with absolutely zero
    sarcasm whatsoever






    Tired of living in a world where ANYONE can just... open a drawer?
    Where CHILDREN can casually access spatulas and spicy condiments
    without proving their age to a licensed algorithm? Where your VCR
    doesn't run a background check before spooling up a tape?


    WELL, WORRY NO MORE.





    In this landmark interview, visionary tech policy thinker
    Trollercoaster lays out the roadmap to a safer tomorrow
    — one age-verified gunshot wound at a time. Topics covered
    include:




    System76's courageous capitulation — actually a 5D
    chess move to manufacture the next generation of hackers by
    making Linux mildly annoying again



    Two-factor authentication for firearms — SMS-based
    triggers considered, reluctantly rejected (reception is bad at
    most crime scenes)



    GPS injections at birth — like circumcision, but for
    helicopter parents. Kids won't remember. Probably.



    Kitchen-as-a-Service (KaaS™) — powered by Microsoft,
    mandated by your government, billed monthly, support tickets
    routed to Bangalore



    Biannual maturity exams — because some people's
    brains start "deteriorating" and they end up making sarcastic
    podcasts that lawmakers might accidentally take seriously






    The liability framework is elegant in its simplicity: if anything
    bad ever happens to anyone,
    sue the manufacturer
    . Philips. IKEA. Smith & Wesson. Your kitchen. It doesn't
    matter. Someone made a thing, someone got hurt, somebody owes
    somebody lunch money.





    If you
    agree with any of this: contact your lawmakers
    immediately. They are waiting by the phone.


    If you
    disagree
    : too late, buddy. The lobbyists are already having the soup
    course.






    The lawmakers need your insights. They are, as noted, extremely
    narrow-minded.


    The comment section is open.

    Provide feedback on this episode.
  • Hacker Public Radio

    HPR4615: Clicking through an audit

    10/04/2026
    This show has been flagged as Explicit by the host.

    ISO 27001

    from Wikipedia.org:

    ISO/IEC 27001
    is an

    information security standard

    . It specifies the requirements for establishing, implementing, maintaining and continually improving an

    information security management system

    (ISMS). Organizations with an ISMS that meet the standard's requirements can choose to have it certified by an

    accredited certification body

    following successful completion of an

    audit

    .

    Information security audit

    from Wikipedia.org:

    An
    information security audit
    is an

    audit

    of the level of

    information security

    in an organization. It is an independent review and examination of system records, activities, and related documents. These audits are intended to improve the level of information security, avoid improper information security designs, and optimize the efficiency of the security safeguards and security processes.

    Factors contributing to cybersecurity fatigue

    Source: Adapted from Factors contributing to cybersecurity fatigue by L. J. J. S. (2024), Abertay University.

    Available at:

    https://rke.abertay.ac.uk/en/publications/factors-contributing-to-cybersecurity-fatigue/

    In cloud-based environments, the push for high-security standards often leads to "cybersecurity fatigue," which creates unintended psychological strain on employees.

    Constant interruptions from repetitive access requests.

    Overload of security checks and decision fatigue.

    Lack of clear understanding regarding actual cybersecurity risks.

    Impact on Behavior

    Fatigue frequently leads to negative outcomes, including the bypassing of security protocols, abandonment of necessary tasks, and total disengagement from mandatory training.

    Key Concept

    The study highlights "attitudinal fatigue" (an employee's negative mindset toward security) as a major barrier to organizational resilience and compliance.

    Strategic Recommendations:

    Transition to "contextualized training" that uses relatable, real-world scenarios.

    Streamline security workflows to minimize disruption to daily productivity.

    Develop targeted interventions.

    National Institute of Standards and Technology

    2011 Report:

    Information Security Continuous Monitoring (ISCM) for Federal Information Systems and Organizations

    (Tangentially) related Episodes

    hpr3779 :: Just Because You Can Do a Thing...

    - Trey

    hpr0061 :: Punk Computing

    - Klaatu

    hpr0002 :: Customization the Lost Reason

    - Deepgeek

    Provide feedback on this episode.
  • Hacker Public Radio

    HPR4614: Dauug|18: Faster Than a ’286, but Inspectable Like a Soroban

    09/04/2026
    This show has been flagged as Clean by the host.

    In this show, Marc Abel presents an introduction to Dauug|18, an 18-bit controller developed by The Dauug House. About the size of a postcard, Dauug|18 avoids the use of complex VLSI such as microprocessors, FPGAs, PLDs, ASICs, and DRAM. Instead, the architecture is built from trivial glue logic and synchronous static RAM, using components that can be hand-soldered and verified for connectivity after assembly.

    The motivation for Dauug|18 is to provide refuge in situations where transparency, auditability, and supply chain integrity are priorities. Rather than relying on high-integration silicon, Dauug|18 is auditable at the logic-gate level, allowing owners to verify the integrity of their hardware.

    This show covers key architectural details, the decision to use SRAM for both memory and logic, and system constraints that stem from Dauug|18's brutal simplicity, limited component selection, and succinctness. The practical effect of these constraints on programming Dauug|18 is also discussed in detail.

    Anticipated uses for Dauug|18 include privacy assertion, critical infrastructure, and curricula for fields relating to computer engineering.

    Files supplied with this show include a short PDF of Dauug|18 architectural details, as well as word-accurate, spell-checked subtitles and their matching transcript.

    More information, technical documentation, and updates on related projects like Dauug|36 can be found at https://dauug.org.
    Provide feedback on this episode.


About Hacker Public Radio

Hacker Public Radio is a podcast that releases shows every weekday, Monday through Friday. Our shows are produced by the community (you) and can be on any topic that is of interest to hackers and hobbyists.
Podcast website
