To combat this inevitability, you may have noticed that processors now consist of multiple cores. This is the heart of parallelization. The problem with parallelization is transforming software to make the best use of this new hardware. It seems simple enough to figure out which tasks can run simultaneously, but the trouble arises when those tasks need to access the same resources. There are different design possibilities, but I'll just give a simple example.
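To make the problem concrete, here's a minimal C++ sketch (my own illustration, not tied to any particular program) in which two threads update the same counter with no coordination at all. The final total usually comes out wrong, because the two threads' read-modify-write steps collide.

```cpp
#include <iostream>
#include <thread>

// Shared data touched by two threads at once, with no protection.
int counter = 0;

void add_many() {
    for (int i = 0; i < 100000; ++i) {
        ++counter;  // unprotected read-modify-write on shared data
    }
}

int main() {
    std::thread a(add_many);
    std::thread b(add_many);
    a.join();
    b.join();
    // The increments interleave, so this is often well below 200000.
    std::cout << "counter = " << counter << " (expected 200000)\n";
}
```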

There are a variety of design choices for solving this problem of parallelization, but the most common one I've seen is the concept of a lock and key. I've actually seen this terminology used with shared drives on Microsoft systems: a shared file is considered locked, and changes can only be made by the user who opened it first.
In code, a 'lock' can be acquired to reserve a block of memory (or any shared resource) for one parallel task at a time. This solves the problem of sharing, but the challenge for programmers is to use locks sparingly so their programs can still run in parallel as much as possible. Typically, you want to keep a block of memory open as much as possible and only lock it exclusively when it is being written to. Thus, if everyone is only reading the memory, everyone can read it at the same time.
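C++ doesn't literally use the words 'lock' and 'key', but its standard library captures the same idea. Here is a rough sketch using std::shared_mutex, where any number of threads may hold the read lock at once but a writer takes exclusive ownership; the class name SharedValue is just something I made up for the example.

```cpp
#include <shared_mutex>

// A value guarded by a reader-writer lock: many readers at once,
// but a write excludes everyone else until it finishes.
class SharedValue {
public:
    int read() const {
        std::shared_lock lock(mutex_);  // shared ("read") lock
        return value_;
    }
    void write(int v) {
        std::unique_lock lock(mutex_);  // exclusive ("write") lock
        value_ = v;
    }
private:
    mutable std::shared_mutex mutex_;
    int value_ = 0;
};
```

Any number of threads can call read() simultaneously without waiting on each other; only a call to write() forces the others to stop and wait, which is exactly the "lock it only when writing" policy described above.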
Parallelization may not help a computer execute a single line of code any faster, but if that work can be split into pieces that run at the same time, then Moore's Law, in terms of raw processing power, can be maintained for the foreseeable future, even as transistors approach the size of atoms. If you are serious about programming, this is a skill you will have to learn.
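As a toy illustration of splitting work up, here's a sketch that divides one big summation across two threads and then combines the partial results. On a multicore machine the two halves can genuinely run at the same time, even though each individual addition is no faster than before.

```cpp
#include <iostream>
#include <numeric>
#include <thread>
#include <vector>

int main() {
    // One million values to add up.
    std::vector<int> data(1'000'000, 1);
    long long left = 0, right = 0;

    // Hand each half of the range to its own thread.
    auto mid = data.begin() + data.size() / 2;
    std::thread t1([&] { left  = std::accumulate(data.begin(), mid, 0LL); });
    std::thread t2([&] { right = std::accumulate(mid, data.end(), 0LL); });
    t1.join();
    t2.join();

    // Combine the partial sums; no locking is needed because each
    // thread wrote only to its own variable.
    std::cout << "total = " << (left + right) << '\n';  // 1000000
}
```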