Search results

  1. Linsear Write - Wikipedia

    en.wikipedia.org/wiki/Linsear_Write

    For each "easy word", defined as words with 2 syllables or less, add 1 point. For each "hard word", defined as words with 3 syllables or more, add 3 points. Divide the points by the number of sentences in the 100-word sample. Adjust the provisional result r: If r > 20, Lw = r / 2. If r ≤ 20, Lw = r / 2 - 1.
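
    Taken at face value, this procedure is straightforward to mechanize. A minimal Python sketch, assuming per-word syllable counts come from some external counter (the function name and input format are illustrative, not from the article):

    ```python
    def linsear_write(word_syllables, sentence_count):
        """Linsear Write grade (Lw) for a ~100-word sample.

        word_syllables: syllable count for each word in the sample.
        sentence_count: number of sentences in the same sample.
        """
        # 1 point per easy word (<= 2 syllables), 3 per hard word (>= 3)
        points = sum(1 if s <= 2 else 3 for s in word_syllables)
        r = points / sentence_count  # provisional result
        return r / 2 if r > 20 else r / 2 - 1
    ```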

  2. Gunning fog index - Wikipedia

    en.wikipedia.org/wiki/Gunning_fog_index

    A sample test using an automated Gunning Fog calculator on a random footnote from the text (#51: Dion, vol. I. lxxix. p. 1363. Herodian, l. v. p. 189.) [9] gave an index of 19.2 using only the sentence count, and an index of 12.5 when including independent clauses. This brought down the fog index from post-graduate to high school level.
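
    For context, the standard Gunning fog formula behind such calculators is 0.4 × (average sentence length + percentage of words with three or more syllables); counting independent clauses as sentence units raises the sentence count, which is why the index drops in the footnote test above. A sketch, assuming the raw counts are already available:

    ```python
    def gunning_fog(words, sentences, complex_words):
        # 0.4 * (average sentence length + percentage of complex words),
        # where "complex" means three or more syllables
        return 0.4 * (words / sentences + 100 * complex_words / words)
    ```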

  3. Flesch–Kincaid readability tests - Wikipedia

    en.wikipedia.org/wiki/Flesch–Kincaid...

    … scores 37.5 as it has 24 syllables and 13 words. While Amazon calculates the text of Moby-Dick as 57.9, [8] one particularly long sentence about sharks in chapter 64 has a readability score of −146.77. [9] One sentence in the beginning of Scott Moncrieff's English translation of Swann's Way, by Marcel Proust, has a score of −515.1. [10]
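
    The 37.5 figure is consistent with the Flesch reading-ease formula, 206.835 − 1.015 × (words/sentences) − 84.6 × (syllables/words), treating the quoted fragment as a single sentence:

    ```python
    def flesch_reading_ease(words, sentences, syllables):
        # Flesch reading ease; higher scores mean easier text, and a
        # single very long sentence can push the score far below zero.
        return (206.835
                - 1.015 * (words / sentences)
                - 84.6 * (syllables / words))

    print(round(flesch_reading_ease(13, 1, 24), 1))  # 37.5
    ```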

  4. Fry readability formula - Wikipedia

    en.wikipedia.org/wiki/Fry_readability_formula

    A rendition of the Fry graph. The Fry readability formula (or Fry readability graph) is a readability metric for English texts, developed by Edward Fry. [1] The grade reading level (or reading difficulty level) is calculated from the average number of sentences (y-axis) and syllables (x-axis) per hundred words.
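
    Because the grade level is read off the published graph rather than computed from a closed-form equation, code can only supply the plot coordinates. A sketch, assuming the usual procedure of averaging several exactly-100-word samples (the input format is illustrative):

    ```python
    def fry_coordinates(samples):
        """Average (x, y) position on the Fry graph.

        samples: (syllable_count, sentence_count) pairs, one per
        100-word passage. The grade level is then looked up on the
        graph itself, which this sketch does not reproduce.
        """
        x = sum(syl for syl, _ in samples) / len(samples)  # syllables per 100 words
        y = sum(sen for _, sen in samples) / len(samples)  # sentences per 100 words
        return x, y
    ```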

  5. Coleman–Liau index - Wikipedia

    en.wikipedia.org/wiki/Coleman–Liau_index

    The Coleman–Liau index is a readability test designed by Meri Coleman and T. L. Liau to gauge the understandability of a text. Like the Flesch–Kincaid Grade Level, Gunning fog index, SMOG index, and Automated Readability Index, its output approximates the U.S. grade level thought necessary to comprehend the text.
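
    Unlike the syllable-based tests above, the Coleman–Liau index needs only letter, word, and sentence counts, which makes it easy to compute exactly. Its published formula is CLI = 0.0588L − 0.296S − 15.8, where L is letters per 100 words and S is sentences per 100 words:

    ```python
    def coleman_liau(letters, words, sentences):
        L = letters / words * 100    # average letters per 100 words
        S = sentences / words * 100  # average sentences per 100 words
        return 0.0588 * L - 0.296 * S - 15.8
    ```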

  6. Levenshtein distance - Wikipedia

    en.wikipedia.org/wiki/Levenshtein_distance

    In information theory, linguistics, and computer science, the Levenshtein distance is a string metric for measuring the difference between two sequences. The Levenshtein distance between two words is the minimum number of single-character edits (insertions, deletions or substitutions) required to change one word into the other.
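
    This definition translates directly into the standard Wagner–Fischer dynamic program, shown here in a memory-light two-row form:

    ```python
    def levenshtein(a: str, b: str) -> int:
        # prev[j] holds the distance between the first i-1 characters
        # of `a` and the first j characters of `b`.
        prev = list(range(len(b) + 1))
        for i, ca in enumerate(a, 1):
            curr = [i]
            for j, cb in enumerate(b, 1):
                cost = 0 if ca == cb else 1
                curr.append(min(prev[j] + 1,          # deletion
                                curr[j - 1] + 1,      # insertion
                                prev[j - 1] + cost))  # substitution
            prev = curr
        return prev[-1]

    print(levenshtein("kitten", "sitting"))  # 3
    ```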

  7. Sentence completion tests - Wikipedia

    en.wikipedia.org/wiki/Sentence_completion_tests

    Sentence completion tests typically provide respondents with beginnings of sentences, referred to as "stems", and respondents then complete the sentences in ways that are meaningful to them. The responses are believed to provide indications of attitudes, beliefs, motivations, or other mental states.

  8. Constituent (linguistics) - Wikipedia

    en.wikipedia.org/wiki/Constituent_(linguistics)

    Tests for constituents are diagnostics used to identify sentence structure. Numerous such tests are in common use for English sentences; 15 of the most frequently applied are listed next: 1) coordination (conjunction), 2) pro-form substitution (replacement), 3) topicalization (fronting), 4) do-so-substitution, 5) one-substitution, 6) …