There is very little (or no) interaction between subtasks in trivially parallel algorithms. In the example of Alice and Bob, interaction is needed to distribute the work and to combine the partial sums. However, during the computation of the individual partial sums, no interaction is needed, and the algorithm can therefore be considered trivially parallel. If the problem were different and Alice needed to interact regularly with Bob during her computations, the algorithm would be considered tightly coupled instead. Solving a sudoku in tandem with someone is an example of a tightly coupled problem.
Programming trivially parallel algorithms is typically easier, and they do not place high demands on the connection between the CPUs. It may even be possible to use internet-connected computers all over the world to compute something in parallel, as in the Folding@home project, where protein folding is studied using personal computers all over the world.
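The partial-sum example can be sketched in code. Below is a minimal illustration (using Python's multiprocessing module, chosen here just for illustration): the only interaction is distributing the work at the start and combining the partial sums at the end; during the computation itself, the two workers are completely independent.

```python
from multiprocessing import Pool

def partial_sum(chunk):
    # Each worker computes its partial sum independently,
    # without any interaction with the other workers.
    return sum(chunk)

if __name__ == "__main__":
    data = list(range(1, 101))          # the numbers 1..100
    chunks = [data[:50], data[50:]]     # distribute the work ("Alice" and "Bob")
    with Pool(processes=2) as pool:
        partials = pool.map(partial_sum, chunks)
    total = sum(partials)               # combine the partial sums
    print(total)                        # 5050
```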
In the case of tightly coupled algorithms, programming is usually more complicated, and a low-latency, high-speed interconnect between the CPUs is essential for good performance. Weather simulation is a typical example of a tightly coupled problem on supercomputers, and many real-world problems fall somewhere between these two extremes.
One common way to expose parallelism is by distributing the data, for example, an array, to individual processing units.
Each processing unit (e.g., CPU core) holds a part of the data and typically performs identical or very similar operations on it. Processing units may need to interact with each other, for example, to exchange information about data within domain boundaries. The previous example of Alice and Bob follows this scheme, as each of them holds a part of the data and performs similar operations on it.
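The data-parallel scheme can be sketched as follows (again with Python's multiprocessing, purely for illustration): an array is split into chunks, each processing unit applies the same operation to its own chunk, and the results are combined at the end.

```python
from multiprocessing import Pool

def work(chunk):
    # Every processing unit performs an identical operation
    # on its own part of the data.
    return [x * x for x in chunk]

if __name__ == "__main__":
    data = list(range(8))
    n_workers = 4
    # Distribute the data: one contiguous chunk per processing unit.
    size = len(data) // n_workers
    chunks = [data[i * size:(i + 1) * size] for i in range(n_workers)]
    with Pool(processes=n_workers) as pool:
        results = pool.map(work, chunks)
    # Combine the per-chunk results back into a single array.
    combined = [x for chunk in results for x in chunk]
    print(combined)                     # [0, 1, 4, 9, 16, 25, 36, 49]
```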
Another common parallelization model is the task farm (or main/worker) approach, where one processing unit sends tasks to workers and then receives results back from them. The tasks can be computationally similar, but they can also be completely different. Often, there are more tasks than workers, and tasks are then dynamically assigned to workers.
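A minimal sketch of the task farm model (using Python's multiprocessing module, as an illustrative choice): the main process puts tasks into a shared queue, and each worker dynamically fetches the next task whenever it finishes the previous one, so having more tasks than workers balances the load automatically.

```python
from multiprocessing import Process, Queue

def worker(tasks, results):
    # Each worker repeatedly fetches a task until it receives
    # the stop signal (None) from the main process.
    while True:
        task = tasks.get()
        if task is None:
            break
        results.put(task * task)        # the "work": here, squaring a number

if __name__ == "__main__":
    tasks, results = Queue(), Queue()
    n_workers, n_tasks = 3, 10          # more tasks than workers
    workers = [Process(target=worker, args=(tasks, results))
               for _ in range(n_workers)]
    for w in workers:
        w.start()
    for t in range(n_tasks):            # main sends the tasks
        tasks.put(t)
    for _ in range(n_workers):          # one stop signal per worker
        tasks.put(None)
    out = sorted(results.get() for _ in range(n_tasks))
    for w in workers:
        w.join()
    print(out)                          # the squares of 0..9
```

Note that the results arrive in whatever order the workers finish, which is why they are sorted before printing.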
As an example of a task farm approach, we could think of Alice, Bob, and Joe making a pizza together (video below). Alice would first give Bob the task of slicing onions, and Joe the task of chopping the ham. Alice herself could then start to prepare the dough. Once finished with the onions, Bob would pass them to Alice (she needs to interrupt her task) and, in turn, Alice could pass a new task of grating cheese back to Bob. Similarly, after passing the chopped ham to Alice, Joe would get a new task of slicing tomatoes. Finally, Alice would assemble everything together.
Task farm and data-parallel approaches can also be combined. Each worker could, for example, consist of multiple execution units, and the data related to a task could then be distributed among them.