The brain will try and understand what it can – it will use clues from the music of the voice and the sounds it can make out, as well as knowledge of how sounds can go together (e.g. a word in English can start with ‘dw’, as in dwell, but it can’t end in ‘dw’). The brain will also use context – how words are related in meaning – to try and guess what words might be there. So a sentence like ‘he caught the fish in his net’ is easier to understand when it’s very distorted, as ‘caught’, ‘fish’ and ‘net’ are all related in meaning. As far as we know these processes occur for all languages, though it would be interesting to know this in more detail – languages vary a lot in the kinds of information they provide!
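If it helps to see the ‘sounds can go together’ idea spelled out, here’s a toy sketch (definitely not a real model of the brain, and the little rule sets are made up just for the example): knowing which sound pairs can begin or end an English word lets you rule out impossible guesses about a distorted word.

```python
# Tiny, made-up samples of allowed word beginnings and endings in English.
POSSIBLE_STARTS = {"dw", "tr", "st", "fl"}
POSSIBLE_ENDS = {"ll", "st", "nd", "mp"}

def could_be_english(word):
    """Return True if the word's first and last letter pairs are allowed."""
    return word[:2] in POSSIBLE_STARTS and word[-2:] in POSSIBLE_ENDS

print(could_be_english("dwell"))   # 'dw' can start a word -> True
print(could_be_english("lledw"))   # no English word ends in 'dw' -> False
```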
Sophie’s already given a great answer to this! I think it’s fascinating that we can make sense of distorted sound and actually we can often be quite successful even when the distortion is severe. Like Sophie says, it’s because our brain is using all the information it receives alongside the knowledge it already has about our language (what words are likely to come up, which sounds are less likely than others). One popular theory in neuroscience at the moment is that our brain is a big prediction machine, trying to guess what’s going to happen based on what we already know.
In some ways, even ordinary speech is distorted. Imagine I were to read out the text on this screen, and then we went back to the recording and chopped out all the times I said “the”. Listening to each of those little clips separately, you’d be amazed how different they sound from each other – sometimes not like “the” at all! Yet in the context of the whole recording, you’d probably hear “the” correctly within the sentences. That’s just one example of how much work our brain does all the time to make sense of the sounds we hear in speech.