When.com Web Search

Search results

  1. Degree of parallelism - Wikipedia

    en.wikipedia.org/wiki/Degree_of_parallelism

    A program running on a parallel computer may utilize different numbers of processors at different times. For each time period, the number of processors used to execute a program is defined as the degree of parallelism. The plot of the DOP as a function of time for a given program is called the parallelism profile. [2]
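
    A minimal sketch of how a parallelism profile could be computed, assuming each task is given as a hypothetical (start, end) interval during which it occupies one processor (the task data and time units are invented for illustration):

    ```python
    # Compute the DOP at each integer time step: the number of tasks
    # whose interval covers that step.
    def parallelism_profile(intervals, t_end):
        return [sum(1 for (s, e) in intervals if s <= t < e)
                for t in range(t_end)]

    if __name__ == "__main__":
        tasks = [(0, 4), (1, 3), (2, 6), (5, 6)]   # (start, end) per task
        print(parallelism_profile(tasks, 6))       # [1, 2, 3, 2, 1, 2]
    ```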

  2. Karp–Flatt metric - Wikipedia

    en.wikipedia.org/wiki/Karp–Flatt_metric

    The Karp–Flatt metric is a measure of parallelization of code in parallel processor systems. This metric exists in addition to Amdahl's law and Gustafson's law as an indication of the extent to which a particular computer code is parallelized. It was proposed by Alan H. Karp and Horace P. Flatt in 1990.
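
    For reference, the serial fraction e that the metric estimates is computed from the measured speedup ψ = T(1)/T(p) on p processors:

    ```latex
    e = \frac{\tfrac{1}{\psi} - \tfrac{1}{p}}{1 - \tfrac{1}{p}}
    ```

    For example, a speedup of ψ = 3 on p = 4 processors gives e = (1/3 − 1/4)/(1 − 1/4) = 1/9, i.e. roughly 11% of the computation behaves serially.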

  3. Analysis of parallel algorithms - Wikipedia

    en.wikipedia.org/wiki/Analysis_of_parallel...

    The so-called work-time (WT) framework, sometimes called work-depth or work-span, was originally introduced by Shiloach and Vishkin [1] for conceptualizing and describing parallel algorithms. In the WT framework, a parallel algorithm is first described in terms of parallel rounds.
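
    The framework pairs with the standard scheduling (Brent-type) bound, summarized here as a sketch: if an algorithm performs total work W(n) across D(n) parallel rounds, it can be scheduled on p processors to finish in time

    ```latex
    T_p(n) \le \frac{W(n)}{p} + D(n)
    ```

    so the work and the number of rounds together bound the running time for any processor count.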

  4. Granularity (parallel computing) - Wikipedia

    en.wikipedia.org/wiki/Granularity_(parallel...

    In parallel computing, granularity (or grain size) of a task is a measure of the amount of work (or computation) which is performed by that task. [1] Another definition of granularity takes into account the communication overhead between multiple processors or processing elements.
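
    Under the communication-overhead definition, granularity is often expressed, as a sketch of the usual convention, as the ratio of computation time to communication time:

    ```latex
    G = \frac{T_{\mathrm{comp}}}{T_{\mathrm{comm}}}
    ```

    Fine-grained tasks have small G (communication overhead dominates); coarse-grained tasks have large G.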

  5. Data parallelism - Wikipedia

    en.wikipedia.org/wiki/Data_parallelism

    Data parallelism and task parallelism can be implemented simultaneously by combining them in the same application. This is called mixed data and task parallelism. Mixed parallelism requires sophisticated scheduling algorithms and software support. It is the best kind of parallelism when communication is slow and the number of processors is large. [7]
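
    A minimal sketch of the data-parallel side of such a combination, assuming a Python process pool (the function and data are invented for illustration): the same operation is applied to different elements of one dataset by several workers.

    ```python
    # Data-parallel sketch: one function, many data items, several workers.
    from multiprocessing import Pool

    def square(x):          # the single operation applied to every item
        return x * x

    if __name__ == "__main__":
        data = list(range(16))
        with Pool(processes=4) as pool:        # 4 worker processes
            results = pool.map(square, data)   # same task, different data
        print(results)
    ```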

  6. Instruction-level parallelism - Wikipedia

    en.wikipedia.org/wiki/Instruction-level_parallelism

    Instruction-level parallelism (ILP) is the parallel or simultaneous execution of a sequence of instructions in a computer program. More specifically, ILP refers to the average number of instructions run per step of this parallel execution.
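
    The classic three-operation illustration of ILP can be made concrete (input values here are arbitrary): operations 1 and 2 are mutually independent and can run in the same step, while operation 3 must wait for both, so three operations finish in two steps, an ILP of 3/2.

    ```python
    # Classic three-operation ILP illustration (arbitrary input values).
    a, b, c, d = 1, 2, 3, 4

    e = a + b   # operation 1: independent of operation 2
    f = c + d   # operation 2: can execute in the same step as operation 1
    m = e * f   # operation 3: depends on both e and f

    # 3 operations completed in 2 parallel steps -> ILP = 3/2
    print(m)  # 21
    ```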

  7. Task parallelism - Wikipedia

    en.wikipedia.org/wiki/Task_parallelism

    Task parallelism (also known as function parallelism and control parallelism) is a form of parallelization of computer code across multiple processors in parallel computing environments. Task parallelism focuses on distributing tasks, concurrently performed by processes or threads, across different processors.
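
    A minimal sketch of task parallelism, assuming two invented tasks: different functions run concurrently in separate threads, in contrast to data parallelism, where one function runs on different data. (In CPython, threads provide concurrency but not CPU parallelism because of the global interpreter lock; processes would be needed for the latter.)

    ```python
    # Task-parallel sketch: two different tasks run concurrently.
    import threading

    def load_data():
        print("loading data...")   # hypothetical task 1

    def render_ui():
        print("rendering UI...")   # hypothetical task 2

    if __name__ == "__main__":
        t1 = threading.Thread(target=load_data)
        t2 = threading.Thread(target=render_ui)
        t1.start()
        t2.start()
        t1.join()   # wait for both tasks to finish
        t2.join()
    ```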

  8. Loop unrolling - Wikipedia

    en.wikipedia.org/wiki/Loop_unrolling

    Loop unrolling, also known as loop unwinding, is a loop transformation technique that attempts to optimize a program's execution speed at the expense of its binary size, an approach known as the space–time tradeoff.
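
    A minimal sketch of manual unrolling by a factor of four (purely illustrative: unrolling is normally a compiler transformation on native code, where the enlarged loop body is what costs binary size):

    ```python
    # Rolled loop: one element per iteration.
    def sum_rolled(xs):
        total = 0
        for x in xs:
            total += x
        return total

    # Unrolled by 4: four elements per iteration, fewer loop-control steps,
    # at the cost of a larger body -- the space-time tradeoff.
    def sum_unrolled4(xs):
        total = 0
        i, n = 0, len(xs)
        while i + 4 <= n:
            total += xs[i] + xs[i + 1] + xs[i + 2] + xs[i + 3]
            i += 4
        while i < n:               # remainder loop for leftover elements
            total += xs[i]
            i += 1
        return total

    if __name__ == "__main__":
        data = list(range(10))
        assert sum_rolled(data) == sum_unrolled4(data) == 45
    ```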