Revered_Beard
From what I understood of the article, it's not just the size (which you can get from merging previous black holes), but the combination of size, speed, and angle that are raising eyebrows.
Smash two random black holes together, and the odds are they're spinning at different random angles. Do that a bunch of times, and unless their angles all happened to be lined up just right, the resulting spin will be a lot slower than the maximum speed a black hole of that size can spin. But these were spinning at 80% and 90% of their max speed.
Okay, so maybe they were both "normal-sized" black holes that gobbled up a lot of matter around a galactic nucleus? That might work, except then you'd expect them both to be spinning in the same direction - but they weren't.
So, none of the scientists' predictions are really matching what they actually observed. Maybe it was one of those things, maybe those models are off a bit, or maybe there's another model to explain these kinds of black holes that we just haven't thought of yet.
As an example, in Reaper, you can add a reverb effect to a section that you are looping. Then in the Render dialog, enable the "second pass render" option.
That chunk of audio it renders will become a perfectly seamless loop in itself. The reverb tail that would have gotten chopped off at the end of the render instead continues over the start of it.
At that point, if you don't need the beginning and end of the song, you have a chunk of it that loops seamlessly forever when played on repeat.
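If you're curious what that second-pass render is actually doing under the hood, here's a minimal sketch in Python/numpy. This is my own illustration, not Reaper's code - it assumes you've already rendered the loop region plus its overhanging reverb tail into one array, and it just folds the tail back onto the start:

```python
import numpy as np

def make_seamless_loop(rendered: np.ndarray, loop_len: int) -> np.ndarray:
    """Fold the reverb tail (everything past loop_len samples) back onto
    the start of the loop, so the tail 'continues' into the next pass."""
    loop = rendered[:loop_len].copy()
    tail = rendered[loop_len:]            # audio that overhangs the loop end
    n = min(len(tail), loop_len)          # assume the tail fits in one pass
    loop[:n] += tail[:n]                  # mix the tail over the loop's start
    return loop
```

When the result is played on repeat, the summed-in tail at the start of each pass is exactly what the previous pass's reverb would have produced, so there's no audible seam.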
If you are willing to do it manually, I would highly recommend using Reaper instead. Both Audacity and Reaper have learning curves to them, but Reaper has dramatically better tools for seamless transitions. You are more likely to end up with clicks and pops in Audacity (or pay a steep price in time fiddling around at the microscopic level of the waveform).
From the research paper:
a) Comparison of wt and mRFP-modified major ampullate silk fibers rolled on a capillary glass (scale bars: 550 µm).
b) Strong red fluorescence can also be seen in the major ampullate gland (scale bar: 277 µm).
c i) The genomic implementation of mRFP into the major ampullate silk was confirmed by amplifying the mRFP DNA sequence extracted from the spider's leg. Only those spiders with red fluorescent silk (scale bar: 138 µm) showed the mRFP sequence-derived signal in the agarose gel.
c ii) Total RNA was extracted from the glands, reverse-transcribed, and subjected to RT-qPCR and a melting curve analysis showing peaks at 83 °C and 87 °C, based on a small and a large amplified fragment.
I think it's 100% a didgeridoo, but one that has been molded into a shape superficially resembling a saxophone.
As a longtime didg player, I can tell you that the thing that makes this absolutely worth every penny is not how light it is, the paint job, etc, but the fact that it can hit so many "hoot" notes (what they call "trumpets"), and that each hoot note is tuned to be in the same scale as the main drone.
Most didgeridoos have only one, or maybe two hoot notes, but I watched some other videos of these things being played, and I'm seeing four or five hoot notes, in addition to the main drone.
At that point, it's starting to grow beyond the realm of wind percussion instrument, into something that can play melodies.
Wow. As someone who literally makes sound effects for a living, I'm going to have to remember this one. That was a neat effect.
I think sometimes there are emotions that "need" to be acknowledged for what they are. When we attempt to ignore them, it only creates an emotional dissonance.
Like, if we are struggling with depression, and our emotional "background music" is a sad song in a minor key, but we try to fight it by playing happy music in a major key... Maybe one song can drown out the other, and become the new background? But more likely, we'll just end up with dissonance. The happy song we are trying to listen to will just make us feel uncomfortable as a result.
But if we listen to a sad song instead, it can resonate with, harmonize with, the emotional "background music" playing in our subconscious. The emotion itself wants to be heard and acknowledged, and by listening to a song that the emotion can synchronize with, we can help resolve the emotions as the song itself resolves.
(There's limits to that, of course - for most things, healing happens gradually in layers, so it's not like one song solves all problems, or anything like that.)
On the flip side, there was one time I was in a casual group setting, there was a big crowd of people all having various conversations, and I started playing a musical instrument softly in the background. I noticed that the song had a rather big impact on the emotional current of the group as a whole, people started speaking with a little more energy, a little more pep, a little more happiness... and when the song ended, that emotional zest faded away from the group as well.
So, context is important.
I think you actually nailed the point perfectly. Part of the social contract is that an employer will provide enough money to meet the basic needs of the employees. When the employer fails to do that, employees can feel like "wage slaves", or prisoners, who are being mistreated.
"We've had to limit our food anyway," said Valdivia. "So basically you are kind of starving us, Kaiser."
I recently produced a radio drama on what life was like before we had child labor laws, and how they came about. If you're interested, it's called "Florence Kelley, The Children's Champion."
You can kinda sorta get close using EQ, but if you really want to do it right, you'll need to get into impulse responses.
If you want a really simple, really expensive option with all the bells and whistles, then check out Speakerphone by AudioEase.
If you are on a budget, or prefer the DIY approach, you will first need a convolution reverb plug-in. It takes a recording of an impulse response (which sounds like a starter pistol going off) and applies that reverb to whatever sounds you feed it. If you need a free option, Reaper has a free plug-in called ReaVerb, and I think they have a version of it that works with other DAWs as well.
Then you'll need to search for an impulse response of a radio, and use that.
Optionally, if you really want it to sound like it's being played in a bar, find another impulse response that gives an impression of the room - what you think the bar should sound like.
You can layer them, so it sounds like it's being played from a radio in the environment of a bar. And when done right, it will be absolutely impossible to tell whether it was the real thing or simulated through plugins.
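For anyone who wants to see the math behind what a convolution reverb plug-in is doing, the whole trick is literally convolution: the dry audio gets convolved with the impulse response, and layering two IRs (radio speaker, then room) just means convolving twice. A rough Python/numpy sketch, under the assumption that the audio and IRs are already loaded as sample arrays (the names here are my own, not from any plug-in):

```python
import numpy as np

def apply_ir(audio: np.ndarray, ir: np.ndarray) -> np.ndarray:
    """Convolve dry audio with an impulse response, then normalize the
    peak so the wet signal doesn't clip."""
    wet = np.convolve(audio, ir)
    peak = np.max(np.abs(wet))
    return wet / peak if peak > 0 else wet

# Layering, as described above: first the radio speaker's IR, then the
# bar room's IR (radio_ir and bar_ir are hypothetical loaded recordings):
#   wet = apply_ir(apply_ir(dry, radio_ir), bar_ir)
```

Real plug-ins do this with FFT-based convolution for speed, but the result is the same: the output at every moment is the sum of the input's past samples, each colored by the IR.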
To be clear, it's not that they shoot laser beams from their feathers as some sort of mating ritual or defense mechanism (which, honestly, is probably how I would have used my own laser feathers, if I had them), but that there are strikingly identical nanostructures that can reflect back a little bit of laser light under laboratory conditions: