Search results
  2. Picard–Fuchs equation - Wikipedia

    en.wikipedia.org/wiki/Picard–Fuchs_equation

    This equation can be cast into the form of the hypergeometric differential equation. It has two linearly independent solutions, called the periods of elliptic functions. The ratio of the two periods is equal to the period ratio τ, the standard coordinate on the upper half-plane.
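
    For reference, the hypergeometric differential equation mentioned in the snippet has the general form shown below; as an illustration (not taken from the article itself), for the Legendre family y² = x(x − 1)(x − λ) the periods satisfy it with parameters a = b = 1/2, c = 1.

    ```latex
    % General hypergeometric differential equation in the variable z:
    \[
      z(1-z)\,\frac{d^{2}w}{dz^{2}}
      + \bigl[c-(a+b+1)z\bigr]\frac{dw}{dz}
      - ab\,w = 0 .
    \]
    % Illustrative special case (Legendre family assumed, a = b = 1/2, c = 1):
    % the periods \Pi(\lambda) then satisfy
    \[
      \lambda(1-\lambda)\,\frac{d^{2}\Pi}{d\lambda^{2}}
      + (1-2\lambda)\frac{d\Pi}{d\lambda}
      - \tfrac{1}{4}\,\Pi = 0 .
    \]
    ```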

  3. Shockley diode equation - Wikipedia

    en.wikipedia.org/wiki/Shockley_diode_equation

    Later he gives a corresponding equation for current as a function of voltage under additional assumptions, which is the equation we call the Shockley ideal diode equation. [3] He calls it "a theoretical rectification formula giving the maximum rectification", with a footnote referencing a paper by Carl Wagner, Physikalische Zeitschrift 32, pp ...
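
    The "ideal diode equation" referred to here expresses current as a function of voltage through the saturation current and the thermal voltage. A minimal sketch of evaluating it follows; the numeric parameter values are assumptions for illustration, not values from the article.

    ```python
    import math

    def shockley_current(v_d, i_s=1e-12, n=1.0, v_t=0.02585):
        """Shockley ideal diode equation: I = I_S * (exp(V_D / (n * V_T)) - 1).

        v_d : diode voltage in volts
        i_s : reverse saturation current in amperes (illustrative value)
        n   : ideality factor (1 for the ideal equation)
        v_t : thermal voltage kT/q, about 25.85 mV at room temperature
        """
        return i_s * (math.exp(v_d / (n * v_t)) - 1.0)

    # Forward bias: current grows exponentially with voltage.
    print(shockley_current(0.6))   # roughly 1e-2 A for these parameters
    # Reverse bias: current saturates near -i_s.
    print(shockley_current(-0.5))  # approximately -1e-12 A
    ```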

  4. List of trigonometric identities - Wikipedia

    en.wikipedia.org/wiki/List_of_trigonometric...

    A formula for computing the trigonometric identities for the one-third angle exists, but it requires finding the zeroes of the cubic equation 4x³ − 3x + d = 0, where x is the value of the cosine function at the one-third angle and d is the known value of the cosine function at the full angle.
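
    A quick numerical check of that idea is sketched below, using the triple-angle identity cos 3α = 4cos³α − 3cos α directly, so the cubic is written here as 4x³ − 3x − cos θ = 0; the sign of the constant term simply depends on how d is defined. The angle value is arbitrary.

    ```python
    import numpy as np

    theta = 1.2                # full angle in radians (arbitrary example)
    d = np.cos(theta)          # known cosine of the full angle

    # cos(theta) = 4*cos(theta/3)**3 - 3*cos(theta/3), so cos(theta/3)
    # is a root of the cubic 4x^3 - 3x - d = 0.
    roots = np.roots([4.0, 0.0, -3.0, -d])

    print(sorted(roots.real))  # the three real roots of the cubic
    print(np.cos(theta / 3))   # matches one of them (about 0.9211)
    ```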

  5. List of nonlinear ordinary differential equations - Wikipedia

    en.wikipedia.org/wiki/List_of_nonlinear_ordinary...

    Differential equations are prominent in many scientific areas. Nonlinear ones are of particular interest because they commonly describe real-world systems and are much more difficult to solve than linear differential equations.
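
    As a small illustration of treating such an equation numerically, the sketch below integrates the Van der Pol oscillator, chosen here only as a familiar example of a nonlinear ODE with no elementary closed-form solution; the parameter and time span are arbitrary.

    ```python
    import numpy as np
    from scipy.integrate import solve_ivp

    def van_der_pol(t, y, mu=1.0):
        """Van der Pol oscillator x'' - mu*(1 - x^2)*x' + x = 0 as a first-order system."""
        x, v = y
        return [v, mu * (1.0 - x**2) * v - x]

    sol = solve_ivp(van_der_pol, (0.0, 20.0), [2.0, 0.0],
                    t_eval=np.linspace(0.0, 20.0, 200))
    print(sol.y[0][-5:])  # last few values of x(t), settling onto the limit cycle
    ```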

  6. MacCormack method - Wikipedia

    en.wikipedia.org/wiki/MacCormack_method

    In computational fluid dynamics, the MacCormack method (/məˈkɔːrmæk ˈmɛθəd/) is a widely used discretization scheme for the numerical solution of hyperbolic partial differential equations. This second-order finite difference method was introduced by Robert W. MacCormack in 1969. [1]
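
    A minimal sketch of the predictor-corrector structure of the scheme is given below, applied to the linear advection equation u_t + a·u_x = 0 with periodic boundaries; the test problem and grid parameters are illustrative, not from the article.

    ```python
    import numpy as np

    def maccormack_advection(u, a, dt, dx, steps):
        """MacCormack scheme for u_t + a*u_x = 0 on a periodic grid.

        The predictor uses a forward difference, the corrector a backward
        difference; averaging the two gives second-order accuracy.
        """
        c = a * dt / dx
        for _ in range(steps):
            # Predictor: provisional value from a forward difference.
            u_star = u - c * (np.roll(u, -1) - u)
            # Corrector: average of old value and corrected predictor.
            u = 0.5 * (u + u_star - c * (u_star - np.roll(u_star, 1)))
        return u

    x = np.linspace(0.0, 1.0, 200, endpoint=False)
    u0 = np.exp(-200.0 * (x - 0.5) ** 2)      # Gaussian pulse
    u1 = maccormack_advection(u0, a=1.0, dt=0.002, dx=x[1] - x[0], steps=250)
    print(float(np.max(u1)))                  # pulse advected half a period, peak near 1
    ```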

  7. Relaxation (iterative method) - Wikipedia

    en.wikipedia.org/wiki/Relaxation_(iterative_method)

    Relaxation methods are used to solve the linear equations resulting from a discretization of the differential equation, for example by finite differences. [2] [3] [4] Iterative relaxation of solutions is commonly dubbed smoothing because with certain equations, such as Laplace's equation, it resembles repeated application of a local smoothing ...
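
    A minimal sketch of one such relaxation method, Jacobi iteration for the five-point discretization of Laplace's equation, is shown below; the grid size and boundary values are illustrative. Each sweep replaces every interior value with the average of its four neighbours, which is the local smoothing the snippet alludes to.

    ```python
    import numpy as np

    def jacobi_relaxation(u, sweeps):
        """Jacobi sweeps for the 5-point discretization of Laplace's equation.

        Interior points are repeatedly replaced by the average of their four
        neighbours; boundary values are held fixed.
        """
        for _ in range(sweeps):
            u_new = u.copy()
            u_new[1:-1, 1:-1] = 0.25 * (u[:-2, 1:-1] + u[2:, 1:-1] +
                                        u[1:-1, :-2] + u[1:-1, 2:])
            u = u_new
        return u

    # Illustrative problem: zero boundaries except the top edge held at 1.
    u = np.zeros((50, 50))
    u[0, :] = 1.0
    u = jacobi_relaxation(u, sweeps=500)
    print(float(u[25, 25]))   # interior value relaxing toward the harmonic solution
    ```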

  8. Brent's method - Wikipedia

    en.wikipedia.org/wiki/Brent's_method

    Suppose that we want to solve the equation f(x) = 0. As with the bisection method, we need to initialize Dekker's method with two points, say a₀ and b₀, such that f(a₀) and f(b₀) have opposite signs. If f is continuous on [a₀, b₀], the intermediate value theorem guarantees the existence of a solution between a₀ and b₀.
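
    A minimal sketch of the bracketing idea described above: start from a₀, b₀ with f(a₀) and f(b₀) of opposite sign and repeatedly shrink the bracket. Plain bisection is shown for simplicity; Brent's method itself mixes bisection with secant and inverse quadratic steps and is available as scipy.optimize.brentq. The test function is arbitrary.

    ```python
    from scipy.optimize import brentq

    def bisect(f, a, b, tol=1e-12):
        """Bisection on a sign-changing bracket [a, b]: keep the subinterval whose
        endpoints still have opposite signs, as the intermediate value theorem
        guarantees a root there for continuous f."""
        fa, fb = f(a), f(b)
        if fa * fb > 0:
            raise ValueError("f(a) and f(b) must have opposite signs")
        while b - a > tol:
            m = 0.5 * (a + b)
            fm = f(m)
            if fa * fm <= 0:
                b, fb = m, fm
            else:
                a, fa = m, fm
        return 0.5 * (a + b)

    f = lambda x: x**3 - 2.0       # root at 2**(1/3)
    print(bisect(f, 1.0, 2.0))     # about 1.259921
    print(brentq(f, 1.0, 2.0))     # same root, typically in fewer iterations
    ```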

  9. Polynomial interpolation - Wikipedia

    en.wikipedia.org/wiki/Polynomial_interpolation

    For example, given a = f(x) = a₀x⁰ + a₁x¹ + ··· and b = g(x) = b₀x⁰ + b₁x¹ + ···, the product ab is a specific value of W(x) = f(x)g(x). One may easily find points along W(x) at small values of x, and interpolation based on those points will yield the terms of W(x) and the specific product ab. As formulated in Karatsuba ...
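
    A small sketch of that evaluate-and-interpolate idea for multiplying two polynomials is given below; the coefficients and sample points are made up for illustration. Evaluate f and g at enough points, multiply pointwise, and interpolate to recover the coefficients of W(x) = f(x)g(x).

    ```python
    import numpy as np

    f = [2.0, 3.0]        # f(x) = 2 + 3x   (coefficients in increasing degree)
    g = [1.0, 4.0]        # g(x) = 1 + 4x

    # deg(W) = deg(f) + deg(g) = 2, so 3 sample points determine W uniquely.
    xs = np.array([0.0, 1.0, 2.0])
    ws = np.polyval(f[::-1], xs) * np.polyval(g[::-1], xs)   # pointwise products W(x_k)

    # Interpolate through the sampled products to recover W's coefficients.
    w = np.polyfit(xs, ws, deg=2)[::-1]
    print(w)                       # [ 2. 11. 12.]  ->  W(x) = 2 + 11x + 12x^2
    print(np.convolve(f, g))       # direct coefficient multiplication agrees
    ```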