More precisely, the divergence theorem states that the surface integral of a vector field over a closed surface, which is called the "flux" through the surface, is equal to the volume integral of the divergence over the region enclosed by the surface. Intuitively, it states that "the sum of all sources of the field in a region (with sinks regarded as negative sources) gives the net flux out of the region".
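In symbols, for a region V with closed boundary surface ∂V and a continuously differentiable vector field F, a standard statement of the theorem is

  ∮_{∂V} F · dS = ∫_V (∇ · F) dV.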
By the divergence theorem, Gauss's law can alternatively be written in the differential form: ∇ · E = ρ/ε₀, where ∇ · E is the divergence of the electric field, ε₀ is the vacuum permittivity, and ρ is the total volume charge density (charge per unit volume).
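To see why, equate the flux form of Gauss's law with the divergence theorem applied to E; since both hold for every volume V, the integrands must agree pointwise:

  ∮_{∂V} E · dA = (1/ε₀) ∫_V ρ dV  and  ∮_{∂V} E · dA = ∫_V (∇ · E) dV  ⟹  ∇ · E = ρ/ε₀.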
In vector calculus, divergence is a vector operator that operates on a vector field, producing a scalar field giving the quantity of the vector field's source at each point. More technically, the divergence represents the volume density of the outward flux of a vector field from an infinitesimal volume around a given point.
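In Cartesian coordinates this reduces to ∇ · F = ∂F_x/∂x + ∂F_y/∂y + ∂F_z/∂z, which can be checked symbolically. A minimal sketch, assuming SymPy is installed; the field F is an arbitrary made-up example:

```python
from sympy.vector import CoordSys3D, divergence

# Arbitrary example field F = x^2 i + y^2 j + z^2 k in Cartesian coordinates.
R = CoordSys3D('R')
F = R.x**2 * R.i + R.y**2 * R.j + R.z**2 * R.k

print(divergence(F))  # 2*R.x + 2*R.y + 2*R.z, i.e. dFx/dx + dFy/dy + dFz/dz
```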
As the name implies, the divergence is a (local) measure of the degree to which vectors in the field diverge. The divergence of a tensor field T of non-zero order k is written as div(T) = ∇ · T, a contraction to a tensor field of order k − 1. Specifically, the divergence of a vector is a scalar.
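In index notation, under one common convention that contracts the derivative against the first index, this reads (∇ · T)_{i₂…i_k} = ∂_{i₁} T_{i₁i₂…i_k}; for k = 1 it reduces to the familiar scalar ∂_i v_i.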
Expanding the divergence of the right-hand side (of the Ampère–Maxwell law, ∇ × H = J + ∂D/∂t), interchanging derivatives, and applying Gauss's law (∇ · D = ρ) gives:

  0 = ∇ · (∇ × H) = ∇ · (J + ∂D/∂t) = ∇ · J + ∂(∇ · D)/∂t = ∇ · J + ∂ρ/∂t,

i.e., ∂ρ/∂t + ∇ · J = 0. By the Gauss divergence theorem, this means the rate of change of charge in a fixed volume equals the net current flowing through the boundary:

  d/dt ∫_V ρ dV = −∮_{∂V} J · dS.
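The key identity in this chain, ∇ · (∇ × H) = 0, admits a symbolic check. A minimal sketch assuming SymPy, with a generic field whose components f, g, h are unspecified smooth functions:

```python
from sympy import Function, simplify
from sympy.vector import CoordSys3D, curl, divergence

R = CoordSys3D('R')
# Generic smooth field H with unspecified components f, g, h.
f, g, h = (Function(name)(R.x, R.y, R.z) for name in ('f', 'g', 'h'))
H = f*R.i + g*R.j + h*R.k

print(simplify(divergence(curl(H))))  # 0: the mixed partial derivatives cancel pairwise
```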
Here I(X; Y) = D_KL(P_(X,Y) ‖ P_X ⊗ P_Y), where D_KL is the Kullback–Leibler divergence, and P_X ⊗ P_Y is the outer product distribution, which assigns probability p_X(x)·p_Y(y) to each (x, y). Notice, by a basic property of the Kullback–Leibler divergence, that I(X; Y) is equal to zero precisely when the joint distribution coincides with the product of the marginals, i.e. when X and Y are independent (and hence observing Y tells you nothing about X).
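As an illustration, for a discrete joint distribution the mutual information can be computed directly as this KL divergence; the probabilities below are made up for the example:

```python
import numpy as np

# Toy joint distribution of two binary variables (rows: x, columns: y).
p_xy = np.array([[0.30, 0.20],
                 [0.10, 0.40]])
p_x = p_xy.sum(axis=1, keepdims=True)  # marginal distribution of X
p_y = p_xy.sum(axis=0, keepdims=True)  # marginal distribution of Y

# I(X;Y) = KL(p_xy || p_x ⊗ p_y), here in nats.
mi = np.sum(p_xy * np.log(p_xy / (p_x * p_y)))
print(mi)  # ≈ 0.086 > 0; it would be exactly 0 if p_xy factored as p_x * p_y
```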
This is why the magnetic field, characterized by zero divergence, can be expressed as the curl of a magnetic vector potential. If W is a vector field with curl(W) = V, then adding any gradient vector field grad(f) to W will result in another vector field W + grad(f) such that curl(W + grad(f)) = V as well.
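This gauge freedom rests on the companion identity ∇ × (∇f) = 0, which admits the same kind of symbolic check (again a sketch assuming SymPy):

```python
from sympy import Function
from sympy.vector import CoordSys3D, curl, gradient

R = CoordSys3D('R')
f = Function('f')(R.x, R.y, R.z)  # an arbitrary smooth scalar field

print(curl(gradient(f)))  # 0 (zero vector): adding grad(f) to W cannot change curl(W)
```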
In probability theory, an f-divergence is a certain type of function D_f(P ‖ Q) that measures the difference between two probability distributions P and Q. Many common divergences, such as the KL-divergence, Hellinger distance, and total variation distance, are special cases of f-divergence.
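Concretely, D_f(P ‖ Q) = ∫ f(dP/dQ) dQ for a convex generator f with f(1) = 0, and each named divergence falls out of one choice of f. A sketch for discrete distributions; the example probabilities are made up, and the Hellinger normalization differs by a factor of 2 across references:

```python
import numpy as np

def f_divergence(p, q, f):
    """D_f(P || Q) = sum_i q_i * f(p_i / q_i) for discrete distributions."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    return np.sum(q * f(p / q))

p = [0.2, 0.5, 0.3]
q = [0.3, 0.4, 0.3]

kl = f_divergence(p, q, lambda t: t * np.log(t))          # f(t) = t log t       -> KL divergence
tv = f_divergence(p, q, lambda t: 0.5 * np.abs(t - 1))    # f(t) = |t - 1| / 2   -> total variation
he = f_divergence(p, q, lambda t: (np.sqrt(t) - 1)**2)    # f(t) = (sqrt(t)-1)^2 -> squared Hellinger (x2)
print(kl, tv, he)
```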