Normal distribution

[Infobox: probability density function (Normal Distribution PDF.svg; the red curve is the standard normal distribution) and cumulative distribution function (Normal Distribution CDF.svg), with fields for notation, parameters (mean μ ∈ ℝ, i.e. location, and variance σ² > 0, i.e. squared scale), support, PDF, CDF, quantile, mean, median, mode, variance, MAD, skewness, excess kurtosis, entropy, MGF, CF, Fisher information, and Kullback–Leibler divergence.]

In statistics, a normal distribution (also called the Gaussian, Gauss, or Laplace–Gauss distribution) is a type of continuous probability distribution for a real-valued random variable. The general form of its probability density function is

f(x) = \frac{1}{\sigma\sqrt{2\pi}}\, e^{-\frac{1}{2}\left(\frac{x-\mu}{\sigma}\right)^{2}}

The parameter μ is the mean or expectation of the distribution (and also its median and mode), while the parameter σ is its standard deviation. The variance of the distribution is σ².[1] A random variable with a Gaussian distribution is said to be normally distributed, and is called a normal deviate.

Normal distributions are important in statistics and are often used in the natural and social sciences to represent real-valued random variables whose distributions are not known.[2][3] Their importance is partly due to the central limit theorem. It states that, under some conditions, the average of many samples (observations) of a random variable with finite mean and variance is itself a random variable whose distribution converges to a normal distribution as the number of samples increases. Therefore, physical quantities that are expected to be the sum of many independent processes, such as measurement errors, often have distributions that are nearly normal.[4]

Moreover, Gaussian distributions have some unique properties that are valuable in analytic studies. For instance, any linear combination of a fixed collection of normal deviates is a normal deviate. Many results and methods, such as propagation of uncertainty and least squares parameter fitting, can be derived analytically in explicit form when the relevant variables are normally distributed.

A normal distribution is sometimes informally called a bell curve.[5] However, many other distributions are bell-shaped (such as the Cauchy, Student's t, and logistic distributions).

The univariate probability distribution is generalized for vectors in the multivariate normal distribution and for matrices in the matrix normal distribution.

Definitions

Standard normal distribution

The simplest case of a normal distribution is known as the standard normal distribution or unit normal distribution. This is a special case when μ = 0 and σ = 1, and it is described by this probability density function (or density):

\varphi(z) = \frac{e^{-z^{2}/2}}{\sqrt{2\pi}}

The variable z has a mean of 0 and a variance and standard deviation of 1. The density φ(z) has its peak 1/√(2π) at z = 0 and inflection points at z = +1 and z = −1.

Although the density above is most commonly known as the standard normal, a few authors have used that term to describe other versions of the normal distribution. Carl Friedrich Gauss, for example, defined the standard normal as

\varphi(z) = \frac{e^{-z^{2}}}{\sqrt{\pi}}

which has a variance of 1/2, and Stephen Stigler[6] defined the standard normal as

\varphi(z) = e^{-\pi z^{2}}

which has a simple functional form and a variance of σ² = 1/(2π).

General normal distribution

Every normal distribution is a version of the standard normal distribution, whose domain has been stretched by a factor σ (the standard deviation) and then translated by μ (the mean value):

f(x \mid \mu, \sigma^{2}) = \frac{1}{\sigma}\,\varphi\!\left(\frac{x-\mu}{\sigma}\right)

The probability density must be scaled by 1/σ so that the integral is still 1.

If Z is a standard normal deviate, then X = σZ + μ will have a normal distribution with expected value μ and standard deviation σ. This is equivalent to saying that the standard normal distribution Z can be scaled/stretched by a factor of σ and shifted by μ to yield a different normal distribution, called X. Conversely, if X is a normal deviate with parameters μ and σ², then this X distribution can be re-scaled and shifted via the formula Z = (X − μ)/σ to convert it to the standard normal distribution. This variate is also called the standardized form of X.
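The standardization just described can be checked numerically. A minimal Python sketch, assuming only the standard library's statistics module (variable names are illustrative):

```python
from statistics import NormalDist

mu, sigma = 2.0, 3.0
std = NormalDist(0, 1)           # standard normal Z
general = NormalDist(mu, sigma)  # X = sigma * Z + mu

x = 4.5
# General density as a rescaled standard density: f(x) = (1/sigma) * phi((x - mu) / sigma)
assert abs(general.pdf(x) - std.pdf((x - mu) / sigma) / sigma) < 1e-12

# Standardizing X maps its CDF onto the standard normal CDF
z = (x - mu) / sigma
assert abs(general.cdf(x) - std.cdf(z)) < 1e-12
print("standardization checks passed")
```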

Notation

The probability density of the standard Gaussian distribution (standard normal distribution, with zero mean and unit variance) is often denoted with the Greek letter ϕ (phi).[7] The alternative form of the Greek letter phi, φ, is also used quite often.

The normal distribution is often referred to as N(μ, σ²) or 𝒩(μ, σ²).[8] Thus, when a random variable X is normally distributed with mean μ and standard deviation σ, one may write X ~ 𝒩(μ, σ²).

Alternative parameterizations

Some authors advocate using the precision τ as the parameter defining the width of the distribution, instead of the deviation σ or the variance σ². The precision is normally defined as the reciprocal of the variance, 1/σ².[9] The formula for the distribution then becomes

f(x) = \sqrt{\frac{\tau}{2\pi}}\, e^{-\tau(x-\mu)^{2}/2}

This choice is claimed to have advantages in numerical computations when σ is very close to zero, and to simplify formulas in some contexts, such as in the Bayesian inference of variables with multivariate normal distribution.

Alternatively, the reciprocal of the standard deviation, τ′ = 1/σ, may be defined as the precision, in which case the expression of the normal distribution becomes

f(x) = \frac{\tau'}{\sqrt{2\pi}}\, e^{-(\tau')^{2}(x-\mu)^{2}/2}

According to Stigler, this formulation is advantageous because of a much simpler and easier-to-remember formula, and simple approximate formulas for the quantiles of the distribution.

Normal distributions form an exponential family with natural parameters θ₁ = μ/σ² and θ₂ = −1/(2σ²), and natural statistics x and x². The dual expectation parameters for the normal distribution are η₁ = μ and η₂ = μ² + σ².
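A small sketch under the same convention (precision τ = 1/σ²; names are illustrative) confirming that the precision-parameterized density agrees with the usual one:

```python
import math
from statistics import NormalDist

def normal_pdf_precision(x, mu, tau):
    # Density written in terms of the precision tau = 1 / sigma^2
    return math.sqrt(tau / (2 * math.pi)) * math.exp(-tau * (x - mu) ** 2 / 2)

mu, sigma = 1.0, 2.0
tau = 1 / sigma ** 2
x = 0.7
assert abs(normal_pdf_precision(x, mu, tau) - NormalDist(mu, sigma).pdf(x)) < 1e-12
print("precision parameterization matches")
```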

Cumulative distribution function

The cumulative distribution function (CDF) of the standard normal distribution, usually denoted with the capital Greek letter Φ, is the integral

\Phi(x) = \frac{1}{\sqrt{2\pi}} \int_{-\infty}^{x} e^{-t^{2}/2}\, dt

The related error function erf(x) gives the probability of a random variable, with normal distribution of mean 0 and variance 1/2, falling in the range [−x, x]; that is:

\operatorname{erf}(x) = \frac{2}{\sqrt{\pi}} \int_{0}^{x} e^{-t^{2}}\, dt

These integrals cannot be expressed in terms of elementary functions, and are often said to be special functions. However, many numerical approximations are known; see below for more.

The two functions are closely related, namely

\Phi(x) = \frac{1}{2}\left[1 + \operatorname{erf}\!\left(\frac{x}{\sqrt{2}}\right)\right]

For a generic normal distribution with density f, mean μ and deviation σ, the cumulative distribution function is

F(x) = \Phi\!\left(\frac{x-\mu}{\sigma}\right) = \frac{1}{2}\left[1 + \operatorname{erf}\!\left(\frac{x-\mu}{\sigma\sqrt{2}}\right)\right]

The complement of the standard normal CDF, Q(x) = 1 − Φ(x), is often called the Q-function, especially in engineering texts.[10][11] It gives the probability that the value of a standard normal random variable X will exceed x: P(X > x). Other definitions of the Q-function, all of which are simple transformations of Φ, are also used occasionally.[12]

The graph of the standard normal CDF Φ has 2-fold rotational symmetry around the point (0, 1/2); that is, Φ(−x) = 1 − Φ(x). Its antiderivative (indefinite integral) can be expressed as

\int \Phi(x)\, dx = x\,\Phi(x) + \varphi(x) + C

The CDF of the standard normal distribution can be expanded by integration by parts into a series:

\Phi(x) = \frac{1}{2} + \varphi(x)\left[x + \frac{x^{3}}{3} + \frac{x^{5}}{3\cdot 5} + \cdots + \frac{x^{2n+1}}{(2n+1)!!} + \cdots\right]

where !! denotes the double factorial.

An asymptotic expansion of the CDF for large x can also be derived using integration by parts. For more, see Error function#Asymptotic expansion.[13]

A quick approximation to the standard normal distribution's CDF can be found by using a Taylor series approximation:

\Phi(x) \approx \frac{1}{2} + \frac{1}{\sqrt{2\pi}} \sum_{k=0}^{n} \frac{(-1)^{k}\, x^{2k+1}}{2^{k}\, k!\,(2k+1)}
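The relation Φ(x) = ½[1 + erf(x/√2)] translates directly into code. A minimal Python sketch (standard library only; the function name is chosen for this example):

```python
import math
from statistics import NormalDist

def std_normal_cdf(x):
    # Phi(x) = (1/2) * (1 + erf(x / sqrt(2)))
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

for x in (-2.0, -0.5, 0.0, 1.0, 3.0):
    assert abs(std_normal_cdf(x) - NormalDist().cdf(x)) < 1e-12
print("erf-based CDF matches the library value")
```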

Standard deviation and coverage

For the normal distribution, the values less than one standard deviation away from the mean account for 68.27% of the set; while two standard deviations from the mean account for 95.45%; and three standard deviations account for 99.73%.

About 68% of values drawn from a normal distribution are within one standard deviation σ of the mean; about 95% of the values lie within two standard deviations; and about 99.7% are within three standard deviations.[5] This fact is known as the 68–95–99.7 (empirical) rule, or the 3-sigma rule.

More precisely, the probability that a normal deviate lies in the range between μ − nσ and μ + nσ is given by

F(\mu + n\sigma) - F(\mu - n\sigma) = \Phi(n) - \Phi(-n) = \operatorname{erf}\!\left(\frac{n}{\sqrt{2}}\right)

To 12 significant figures, the values for n = 1, 2, …, 6 are:[14]

n   p = F(μ + nσ) − F(μ − nσ)   1 − p            or "1 in"        OEIS
1   0.682689492137              0.317310507863   3.15148718753    OEIS: A178647
2   0.954499736104              0.045500263896   21.9778945080    OEIS: A110894
3   0.997300203937              0.002699796063   370.398347345    OEIS: A270712
4   0.999936657516              0.000063342484   15787.192673
5   0.999999426697              0.000000573303   1744277.89362
6   0.999999998027              0.000000001973   506797345.897

For large n, one can use the approximation 1 − p ≈ e^(−n²/2) / (n√(π/2)).
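A short sketch (standard library; names are illustrative) reproducing the coverage probabilities in the table above from p = erf(n/√2), together with the large-n approximation of 1 − p:

```python
import math

for n in range(1, 7):
    p = math.erf(n / math.sqrt(2))       # P(mu - n*sigma <= X <= mu + n*sigma)
    tail = 1 - p
    approx = math.exp(-n ** 2 / 2) / (n * math.sqrt(math.pi / 2))
    print(f"n={n}  p={p:.12f}  1-p={tail:.3e}  approx={approx:.3e}")
```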

Quantile function

The quantile function of a distribution is the inverse of the cumulative distribution function. The quantile function of the standard normal distribution is called the probit function, and can be expressed in terms of the inverse error function:

\Phi^{-1}(p) = \sqrt{2}\,\operatorname{erf}^{-1}(2p - 1), \qquad p \in (0, 1)

For a normal random variable with mean μ and variance σ², the quantile function is

F^{-1}(p) = \mu + \sigma\,\Phi^{-1}(p) = \mu + \sigma\sqrt{2}\,\operatorname{erf}^{-1}(2p - 1), \qquad p \in (0, 1)

The quantile Φ⁻¹(p) of the standard normal distribution is commonly denoted as z_p. These values are used in hypothesis testing, construction of confidence intervals and Q–Q plots. A normal random variable X will exceed μ + z_p σ with probability 1 − p, and will lie outside the interval μ ± z_p σ with probability 2(1 − p). In particular, the quantile z_0.975 is 1.96; therefore a normal random variable will lie outside the interval μ ± 1.96σ in only 5% of cases.

The following table gives the quantile z_p such that X will lie in the range μ ± z_p σ with a specified probability p. These values are useful to determine tolerance intervals for sample averages and other statistical estimators with normal (or asymptotically normal) distributions.[15][16] Note that the following table shows √2·erf⁻¹(p) = Φ⁻¹((p + 1)/2), not Φ⁻¹(p) as defined above.

p      z_p               p            z_p
0.80   1.281551565545    0.999        3.290526731492
0.90   1.644853626951    0.9999       3.890591886413
0.95   1.959963984540    0.99999      4.417173413469
0.98   2.326347874041    0.999999     4.891638475699
0.99   2.575829303549    0.9999999    5.326723886384
0.995  2.807033768344    0.99999999   5.730728868236
0.998  3.090232306168    0.999999999  6.109410204869

For small p, the quantile function has the useful asymptotic expansion Φ⁻¹(p) = −√( ln(1/p²) − ln ln(1/p²) − ln(2π) ) + o(1).[17]
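A minimal sketch (standard library; names are illustrative) relating the probit function, the z_p values in the table above, and the small-p asymptotic expansion:

```python
import math
from statistics import NormalDist

probit = NormalDist().inv_cdf              # the probit function, i.e. Phi^{-1}

# z_p such that X lies in mu ± z_p*sigma with probability p, i.e. Phi^{-1}((p + 1) / 2)
print(probit((0.95 + 1) / 2))              # about 1.959963984540

# Small-p asymptotic expansion of Phi^{-1}(p)
p = 1e-10
approx = -math.sqrt(math.log(1 / p**2) - math.log(math.log(1 / p**2)) - math.log(2 * math.pi))
print(probit(p), approx)                   # the two values should be close
```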

The normal distribution is the only distribution whose cumulants beyond the first two (i.e., other than the mean and variance) are zero. It is also the continuous distribution with the maximum entropy for a specified mean and variance.[18][19] Geary has shown, assuming that the mean and variance are finite, that the normal distribution is the only distribution where the mean and variance calculated from a set of independent draws are independent of each other.[20][21]

The normal distribution is a subclass of the elliptical distributions. The normal distribution is symmetric about its mean, and is non-zero over the entire real line. As such it may not be a suitable model for variables that are inherently positive or strongly skewed, such as the weight of a person or the price of a share. Such variables may be better described by other distributions, such as the log-normal distribution or the Pareto distribution.

The value of the normal distribution is practically zero when the value x lies more than a few standard deviations away from the mean (e.g., a spread of three standard deviations covers all but 0.27% of the total distribution). Therefore, it may not be an appropriate model when one expects a significant fraction of outliers, that is, values that lie many standard deviations away from the mean; least squares and other statistical inference methods that are optimal for normally distributed variables often become highly unreliable when applied to such data. In those cases, a more heavy-tailed distribution should be assumed and the appropriate robust statistical inference methods applied.

The Gaussian distribution belongs to the family of stable distributions, which are the attractors of sums of independent, identically distributed distributions whether or not the mean or variance is finite. Except for the Gaussian, which is a limiting case, all stable distributions have heavy tails and infinite variance. It is one of the few distributions that are stable and that have probability density functions that can be expressed analytically, the others being the Cauchy distribution and the Lévy distribution.

The normal distribution with density f(x), mean μ and standard deviation σ > 0 has the following properties:

  • It is symmetric around the point x = μ, which is at the same time the mode, the median and the mean of the distribution.[22]
  • It is unimodal: its first derivative is positive for x < μ, negative for x > μ, and zero only at x = μ.
  • The area bounded by the curve and the x-axis is unity (i.e. equal to one).
  • Its first derivative is f′(x) = −((x − μ)/σ²) f(x).
  • Its density has two inflection points (where the second derivative of f is zero and changes sign), located one standard deviation away from the mean, namely at x = μ − σ and x = μ + σ.[22]
  • Its density is log-concave.[22]
  • Its density is infinitely differentiable, indeed supersmooth of order 2.[23]

In addition, the density φ of the standard normal distribution (i.e. with μ = 0 and σ = 1) also has the following properties:

  • Its first derivative is φ′(x) = −xφ(x).
  • Its second derivative is φ′′(x) = (x² − 1)φ(x).
  • More generally, its nth derivative is φ^(n)(x) = (−1)^n He_n(x) φ(x), where He_n(x) is the nth (probabilist) Hermite polynomial.
  • The probability that a normally distributed variable X with known μ and σ lies in a particular set can be calculated by using the fact that the fraction Z = (X − μ)/σ has a standard normal distribution.

Moments

The plain and absolute moments of a variable X are the expected values of X^p and |X|^p, respectively. If the expected value μ of X is zero, these parameters are called central moments; otherwise, these parameters are called non-central moments. Usually we are interested only in moments with integer order p.

If X has a normal distribution, the non-central moments exist and are finite for any p whose real part is greater than −1. For any non-negative integer p, the plain central moments are:[25]

E[(X - \mu)^{p}] = \begin{cases} 0 & \text{if } p \text{ is odd,} \\ \sigma^{p}\,(p-1)!! & \text{if } p \text{ is even.} \end{cases}

Here n!! denotes the double factorial, that is, the product of all numbers from n to 1 that have the same parity as n.

The central absolute moments coincide with plain moments for all even orders, but are nonzero for odd orders. For any non-negative integer p,

E[|X - \mu|^{p}] = \sigma^{p}\,(p-1)!! \cdot \begin{cases} \sqrt{2/\pi} & \text{if } p \text{ is odd,} \\ 1 & \text{if } p \text{ is even.} \end{cases}

The last formula is valid also for any non-integer p > −1. When μ ≠ 0, the plain and absolute moments can be expressed in terms of the confluent hypergeometric functions ₁F₁ and U.[citation needed]

These expressions remain valid even if p is not an integer. See also generalized Hermite polynomials.

Order   Non-central moment E[X^p]                     Central moment E[(X − μ)^p]
1       μ                                             0
2       μ² + σ²                                       σ²
3       μ³ + 3μσ²                                     0
4       μ⁴ + 6μ²σ² + 3σ⁴                              3σ⁴
5       μ⁵ + 10μ³σ² + 15μσ⁴                           0
6       μ⁶ + 15μ⁴σ² + 45μ²σ⁴ + 15σ⁶                   15σ⁶
7       μ⁷ + 21μ⁵σ² + 105μ³σ⁴ + 105μσ⁶                0
8       μ⁸ + 28μ⁶σ² + 210μ⁴σ⁴ + 420μ²σ⁶ + 105σ⁸       105σ⁸
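A small sketch (standard library; names are illustrative) checking the even central moments σ^p (p − 1)!! from the table above against a Monte Carlo estimate:

```python
import random

def double_factorial(n):
    result = 1
    while n > 1:
        result *= n
        n -= 2
    return result

mu, sigma = 1.0, 2.0
random.seed(0)
samples = [random.gauss(mu, sigma) for _ in range(200_000)]

for p in (2, 4, 6):
    exact = sigma ** p * double_factorial(p - 1)           # sigma^p (p-1)!!
    estimate = sum((x - mu) ** p for x in samples) / len(samples)
    print(p, exact, round(estimate, 2))
```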

The expectation of X conditioned on the event that X lies in an interval [a, b] is given by

E[X \mid a < X < b] = \mu - \sigma^{2}\,\frac{f(b) - f(a)}{F(b) - F(a)}

where f and F respectively are the density and the cumulative distribution function of X. For b = ∞ this is known as the inverse Mills ratio. Note that above, the density f of X is used instead of the standard normal density as in the inverse Mills ratio, so here we have σ² instead of σ.

The Fourier transform of a normal density f with mean μ and standard deviation σ is[26]

\hat{f}(t) = \int_{-\infty}^{\infty} f(x)\, e^{-itx}\, dx = e^{-i\mu t}\, e^{-\frac{1}{2}(\sigma t)^{2}}

where i is the imaginary unit. If the mean μ = 0, the first factor is 1, and the Fourier transform is, apart from a constant factor, a normal density on the frequency domain, with mean 0 and standard deviation 1/σ. In particular, the standard normal distribution φ is an eigenfunction of the Fourier transform.

In probability theory, the Fourier transform of the probability distribution of a real-valued random variable X is closely connected to the characteristic function φ_X(t) of that variable, which is defined as the expected value of e^(itX), as a function of the real variable t (the frequency parameter of the Fourier transform). This definition can be analytically extended to a complex-valued variable t.[27] The relation between both is φ_X(t) = \hat{f}(−t).

Moment and cumulant generating functions

The moment generating function of a real random variable X is the expected value of e^(tX), as a function of the real parameter t. For a normal distribution with density f, mean μ and deviation σ, the moment generating function exists and is equal to

M(t) = E[e^{tX}] = e^{\mu t}\, e^{\sigma^{2}t^{2}/2}

The cumulant generating function is the logarithm of the moment generating function, namely

g(t) = \ln M(t) = \mu t + \tfrac{1}{2}\sigma^{2}t^{2}

Since this is a quadratic polynomial in t, only the first two cumulants are nonzero, namely the mean μ and the variance σ².

In Stein's method the Stein operator and class of a random variable X ~ N(μ, σ²) are 𝒜f(x) = σ²f′(x) − (x − μ)f(x) and the class of all absolutely continuous functions f : ℝ → ℝ such that E[|f′(X)|] < ∞.

Zero-variance limit

In the limit when σ tends to zero, the probability density f(x) eventually tends to zero at any x ≠ μ, but grows without limit if x = μ, while its integral remains equal to 1. Therefore, the normal distribution cannot be defined as an ordinary function when σ = 0.

However, one can define the normal distribution with zero variance as a generalized function; specifically, as a Dirac delta function δ translated by the mean μ, that is f(x) = δ(x − μ). Its CDF is then the Heaviside step function translated by the mean μ.

Maximum entropy

Of all probability distributions over the reals with a specified mean μ and variance σ², the normal distribution N(μ, σ²) is the one with maximum entropy.[28] If X is a continuous random variable with probability density f(x), then the entropy of X is defined as[29][30][31]

H(X) = -\int_{-\infty}^{\infty} f(x)\,\ln f(x)\, dx

where f(x) ln f(x) is understood to be zero whenever f(x) = 0. This functional can be maximized, subject to the constraints that the distribution is properly normalized and has a specified variance, by using variational calculus; a functional with two Lagrange multipliers is defined:

where f(x) is, for now, regarded as some density function with mean μ and standard deviation σ.

At maximum entropy, a small variation δf(x) about f(x) will produce a variation δL about L which is equal to zero:

Since this must hold for any small δf(x), the term in brackets must be zero, and solving for f(x) yields:

Using the constraint equations to solve for λ₀ and λ gives the density of the normal distribution:

The entropy of a normal distribution X ~ N(μ, σ²) is equal to

H(X) = \tfrac{1}{2}\ln(2\pi e\sigma^{2})
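A short sketch (standard library; names are illustrative) comparing the closed-form entropy ½ ln(2πeσ²) with a Monte Carlo estimate of −E[ln f(X)]:

```python
import math
import random
from statistics import NormalDist

mu, sigma = 0.0, 1.5
closed_form = 0.5 * math.log(2 * math.pi * math.e * sigma ** 2)

random.seed(0)
dist = NormalDist(mu, sigma)
samples = [random.gauss(mu, sigma) for _ in range(100_000)]
monte_carlo = -sum(math.log(dist.pdf(x)) for x in samples) / len(samples)

print(closed_form, monte_carlo)   # the two values should agree to about two decimal places
```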

Other properties:

  1. If the characteristic function φ_X of some random variable X is of the form φ_X(t) = exp(Q(t)), where Q(t) is a polynomial, then Q can be at most a quadratic polynomial, and therefore X is a normal random variable.[32] The consequence of this result is that the normal distribution is the only distribution with a finite number (two) of non-zero cumulants.
  2. If X and Y are jointly normal and uncorrelated, then they are independent. The requirement that X and Y be jointly normal is essential; without it the property does not hold.[33][34][proof] For non-normal random variables uncorrelatedness does not imply independence.
  3. The Kullback–Leibler divergence of one normal distribution X₁ ~ N(μ₁, σ₁²) from another X₂ ~ N(μ₂, σ₂²) is given by D_KL(X₁ ‖ X₂) = ln(σ₂/σ₁) + (σ₁² + (μ₁ − μ₂)²)/(2σ₂²) − 1/2 (a short computational sketch appears after this list).
     The Hellinger distance between the same distributions is given by H²(X₁, X₂) = 1 − √(2σ₁σ₂/(σ₁² + σ₂²)) · exp(−(μ₁ − μ₂)²/(4(σ₁² + σ₂²))).
  4. The Fisher information matrix for a normal distribution with respect to μ and σ² is diagonal and takes the form ℐ(μ, σ²) = diag(1/σ², 1/(2σ⁴)).
  5. The conjugate prior of the mean of a normal distribution is another normal distribution.[36] Specifically, if x₁, …, xₙ are i.i.d. ~ N(μ, σ²) and the prior is μ ~ N(μ₀, σ₀²), then the posterior distribution for the estimator of μ will be μ | x₁, …, xₙ ~ N( ((σ²/n)μ₀ + σ₀²x̄) / ((σ²/n) + σ₀²), (n/σ² + 1/σ₀²)⁻¹ ).
  6. The family of normal distributions not only forms an exponential family (EF), but in fact forms a natural exponential family (NEF) with quadratic variance function (NEF-QVF). Many properties of normal distributions generalize to properties of NEF-QVF distributions, NEF distributions, or EF distributions generally. NEF-QVF distributions comprise 6 families, including Poisson, Gamma, binomial, and negative binomial distributions, while many of the common families studied in probability and statistics are NEF or EF.
  7. In information geometry, the family of normal distributions forms a statistical manifold with constant curvature −1. The same family is flat with respect to the (±1)-connections ∇^(e) and ∇^(m).[37]
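As referenced in item 3 above, a minimal sketch (standard library; names are illustrative) of the Kullback–Leibler divergence between two normal distributions, checked against a Monte Carlo estimate of E[ln(f₁(X)/f₂(X))] for X ~ N(μ₁, σ₁²):

```python
import math
import random
from statistics import NormalDist

def kl_normal(mu1, s1, mu2, s2):
    # D_KL( N(mu1, s1^2) || N(mu2, s2^2) )
    return math.log(s2 / s1) + (s1 ** 2 + (mu1 - mu2) ** 2) / (2 * s2 ** 2) - 0.5

mu1, s1, mu2, s2 = 0.0, 1.0, 1.0, 2.0
p, q = NormalDist(mu1, s1), NormalDist(mu2, s2)

random.seed(0)
samples = [random.gauss(mu1, s1) for _ in range(100_000)]
estimate = sum(math.log(p.pdf(x) / q.pdf(x)) for x in samples) / len(samples)

print(kl_normal(mu1, s1, mu2, s2), estimate)   # the two values should be close
```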

Related distributions

Central limit theorem

[Figure: comparison of probability density functions p(k) of the sum of n independent variables, illustrating convergence to a normal distribution as n increases, in accordance with the central limit theorem. In the bottom-right graph, the smoothed profiles of the previous graphs are rescaled, superimposed and compared with a normal distribution (black curve).]

The central limit theorem states that under certain (fairly common) conditions, the sum of many random variables will have an approximately normal distribution. More specifically, where X₁, …, Xₙ are independent and identically distributed random variables with the same arbitrary distribution, zero mean, and variance σ², and Z is their mean scaled by √n:

Z = \sqrt{n}\left(\frac{1}{n}\sum_{i=1}^{n} X_{i}\right)

Then, as n increases, the probability distribution of Z will tend to the normal distribution with zero mean and variance σ².

The theorem can be extended to variables Xᵢ that are not independent and/or not identically distributed if certain constraints are placed on the degree of dependence and the moments of the distributions.

Many test statistics, scores, and estimators encountered in practice contain sums of certain random variables in them, and even more estimators can be represented as sums of random variables through the use of influence functions. The central limit theorem implies that those statistical parameters will have asymptotically normal distributions.

The central limit theorem also implies that certain distributions can be approximated by the normal distribution, for example:

  • The binomial distribution B(n, p) is approximately normal with mean np and variance np(1 − p) for large n.
  • The Poisson distribution with parameter λ is approximately normal with mean λ and variance λ, for large values of λ.[38]
  • The chi-squared distribution χ²(k) is approximately normal with mean k and variance 2k, for large k.
  • The Student's t-distribution t(ν) is approximately normal with mean 0 and variance 1 when ν is large.

Whether these approximations are sufficiently accurate depends on the purpose for which they are needed, and the rate of convergence to the normal distribution. It is typically the case that such approximations are less accurate in the tails of the distribution.

A general upper bound for the approximation error in the central limit theorem is given by the Berry–Esseen theorem; improvements of the approximation are given by the Edgeworth expansions.

This theorem can also be used to justify modeling the sum of many uniform noise sources as Gaussian noise. See AWGN.
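A small illustrative sketch of the central limit theorem (standard library; names chosen for this example): tail probabilities of standardized sums of uniform variables are already close to the normal value for moderate n:

```python
import math
import random
from statistics import NormalDist

random.seed(0)
n, trials = 12, 50_000
var_uniform = 1 / 12                     # variance of U(0, 1)

exceed = 0
for _ in range(trials):
    s = sum(random.random() for _ in range(n))
    z = (s - n * 0.5) / math.sqrt(n * var_uniform)   # standardized sum
    if z > 1.0:
        exceed += 1

print(exceed / trials, 1 - NormalDist().cdf(1.0))    # both should be near 0.1587
```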

[Figure: probability densities, cumulative distributions and inverse cumulative distributions of several functions of one or two (possibly correlated) normal variables, computed with the numerical method of ray-tracing.[39]]

The probability density, cumulative distribution, and inverse cumulative distribution of any function of one or more independent or correlated normal variables can be computed with the numerical method of ray-tracing[39] (Matlab code). In the following sections we look at some special cases.

If X is distributed normally with mean μ and variance σ², then:

  • aX + b, for any real numbers a and b, is also normally distributed, with mean aμ + b and standard deviation |a|σ. That is, the family of normal distributions is closed under linear transformations.
  • The exponential of X is distributed log-normally: e^X ~ ln(N(μ, σ²)).
  • The absolute value of X has a folded normal distribution: |X| ~ N_f(μ, σ²). If μ = 0 this is known as the half-normal distribution.
  • The absolute value of the normalized residual, |X − μ|/σ, has a chi distribution with one degree of freedom: |X − μ|/σ ~ χ₁.
  • The square of X/σ has the noncentral chi-squared distribution with one degree of freedom: X²/σ² ~ χ₁²(μ²/σ²). If μ = 0, the distribution is called simply chi-squared.
  • The log-likelihood of a normal variable x is simply the log of its probability density function: ln L(x) = ln f(x). Since this is a scaled and shifted square of a standard normal variable, it is distributed as a scaled and shifted chi-squared variable.
  • The distribution of the variable X restricted to an interval [a, b] is called the truncated normal distribution.
  • (X − μ)⁻² has a Lévy distribution with location 0 and scale σ⁻².
Operations on two independent normal variables
  • If X₁ and X₂ are two independent normal random variables, with means μ₁, μ₂ and standard deviations σ₁, σ₂, then their sum X₁ + X₂ will also be normally distributed,[proof] with mean μ₁ + μ₂ and variance σ₁² + σ₂² (a short simulation sketch appears after this list).
  • In particular, if X and Y are independent normal deviates with zero mean and variance σ², then X + Y and X − Y are also independent and normally distributed, with zero mean and variance 2σ². This is a special case of the polarization identity.[40]
  • If X₁ and X₂ are two independent normal deviates with mean μ and standard deviation σ, and a and b are arbitrary real numbers, then a suitably normalized linear combination of X₁ and X₂ is again normally distributed with mean μ and deviation σ; it follows that the normal distribution is stable (with exponent α = 2).
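As noted in the first item above, a minimal simulation sketch (standard library; names are illustrative) checking that the sum of two independent normals has mean μ₁ + μ₂ and variance σ₁² + σ₂²:

```python
import random
from statistics import fmean, pvariance

random.seed(0)
mu1, s1, mu2, s2 = 1.0, 2.0, -3.0, 0.5

sums = [random.gauss(mu1, s1) + random.gauss(mu2, s2) for _ in range(200_000)]
print(fmean(sums), mu1 + mu2)                # sample mean vs mu1 + mu2
print(pvariance(sums), s1 ** 2 + s2 ** 2)    # sample variance vs sigma1^2 + sigma2^2
```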
Operations on two independent standard normal variables

If X₁ and X₂ are two independent standard normal random variables with mean 0 and variance 1, then:

  • Their sum and difference are distributed normally with mean zero and variance two: X₁ ± X₂ ~ N(0, 2).
  • Their product Z = X₁X₂ follows the product distribution[41] with density function f_Z(z) = π⁻¹ K₀(|z|), where K₀ is the modified Bessel function of the second kind. This distribution is symmetric around zero, unbounded at z = 0, and has the characteristic function φ_Z(t) = (1 + t²)^(−1/2).
  • Their ratio follows the standard Cauchy distribution: X₁/X₂ ~ Cauchy(0, 1).
  • Their Euclidean norm √(X₁² + X₂²) has the Rayleigh distribution.

Operations on multiple independent normal variables

  • Any linear combination of independent normal deviates is a normal deviate.
  • If X₁, X₂, …, Xₙ are independent standard normal random variables, then the sum of their squares has the chi-squared distribution with n degrees of freedom: X₁² + ⋯ + Xₙ² ~ χₙ².
  • If X₁, X₂, …, Xₙ are independent normally distributed random variables with mean μ and variance σ², then their sample mean is independent of the sample standard deviation,[42] which can be demonstrated using Basu's theorem or Cochran's theorem.[43] The ratio of these two quantities will have the Student's t-distribution with n − 1 degrees of freedom: t = (X̄ − μ)/(S/√n) ~ t_(n−1).
  • If X₁, …, Xₙ and Y₁, …, Yₘ are independent standard normal random variables, then the ratio of their normalized sums of squares will have the F-distribution with (n, m) degrees of freedom: F = [(X₁² + ⋯ + Xₙ²)/n] / [(Y₁² + ⋯ + Yₘ²)/m] ~ F(n, m).

Operations on multiple correlated normal variables

  • A quadratic form of a normal vector, i.e. a quadratic function q = Σᵢ xᵢ² + Σⱼ xⱼ + c of multiple independent or correlated normal variables, is a generalized chi-squared variable.

Operations on the density function

The split normal distribution is most directly defined in terms of joining scaled sections of the density functions of different normal distributions and rescaling the density to integrate to one. The truncated normal distribution results from rescaling a section of a single density function.

Infinite divisibility and Cramér's theorem

For any positive integer n, any normal distribution with mean μ and variance σ² is the distribution of the sum of n independent normal deviates, each with mean μ/n and variance σ²/n. This property is called infinite divisibility.[45]

Conversely, if X₁ and X₂ are independent random variables and their sum X₁ + X₂ has a normal distribution, then both X₁ and X₂ must be normal deviates.[46]

This result is known as Cramér's decomposition theorem, and is equivalent to saying that the convolution of two distributions is normal if and only if both are normal. Cramér's theorem implies that a linear combination of independent non-Gaussian variables will never have an exactly normal distribution, although it may approach it arbitrarily closely.[32]

Bernstein's theorem

Bernstein's theorem states that if X and Y are independent and X + Y and X − Y are also independent, then both X and Y must necessarily have normal distributions.[47][48]

More generally, if X₁, …, Xₙ are independent random variables, then two distinct linear combinations Σₖ aₖXₖ and Σₖ bₖXₖ will be independent if and only if all Xₖ are normal and Σₖ aₖbₖσₖ² = 0, where σₖ² denotes the variance of Xₖ.[47]

Extensions

The notion of normal distribution, being one of the most important distributions in probability theory, has been extended far beyond the standard framework of the univariate (that is one-dimensional) case (Case 1). All these extensions are also called normal or Gaussian laws, so a certain ambiguity in names exists.

A random variable X has a two-piece normal distribution if it has a distribution

where μ is the mean and σ1 and σ2 are the standard deviations of the distribution to the left and right of the mean respectively.

The mean, variance and third central moment of this distribution have been determined[49]

where E(X), V(X) and T(X) are the mean, variance, and third central moment respectively.

One of the main practical uses of the Gaussian law is to model the empirical distributions of many different random variables encountered in practice. In such case a possible extension would be a richer family of distributions, having more than two parameters and therefore being able to fit the empirical distribution more accurately. The examples of such extensions are:

  • Pearson distribution — a four-parameter family of probability distributions that extend the normal law to include different skewness and kurtosis values.
  • The generalized normal distribution, also known as the exponential power distribution, allows for distribution tails with thicker or thinner asymptotic behaviors.

Statistical inference

Estimation of parameters

It is often the case that we do not know the parameters of the normal distribution, but instead want to estimate them. That is, having a sample (x₁, …, xₙ) from a normal N(μ, σ²) population we would like to learn the approximate values of parameters μ and σ². The standard approach to this problem is the maximum likelihood method, which requires maximization of the log-likelihood function:

Taking derivatives with respect to μ and σ² and solving the resulting system of first order conditions yields the maximum likelihood estimates:

\hat{\mu} = \bar{x} = \frac{1}{n}\sum_{i=1}^{n} x_{i}, \qquad \hat{\sigma}^{2} = \frac{1}{n}\sum_{i=1}^{n}(x_{i} - \bar{x})^{2}

Sample mean

Estimator μ̂ is called the sample mean, since it is the arithmetic mean of all observations. The statistic x̄ is complete and sufficient for μ, and therefore by the Lehmann–Scheffé theorem, μ̂ is the uniformly minimum variance unbiased (UMVU) estimator.[50] In finite samples it is distributed normally:

\hat{\mu} \sim N(\mu, \sigma^{2}/n)

The variance of this estimator is equal to the μμ-element of the inverse Fisher information matrix ℐ⁻¹. This implies that the estimator is finite-sample efficient. Of practical importance is the fact that the standard error of μ̂ is proportional to 1/√n, that is, if one wishes to decrease the standard error by a factor of 10, one must increase the number of points in the sample by a factor of 100. This fact is widely used in determining sample sizes for opinion polls and the number of trials in Monte Carlo simulations.

From the standpoint of the asymptotic theory, μ̂ is consistent, that is, it converges in probability to μ as n → ∞. The estimator is also asymptotically normal, which is a simple corollary of the fact that it is normal in finite samples:

\sqrt{n}(\hat{\mu} - \mu) \xrightarrow{d} N(0, \sigma^{2})

Sample variance

The estimator σ̂² is called the sample variance, since it is the variance of the sample (x₁, …, xₙ). In practice, another estimator is often used instead of σ̂². This other estimator is denoted s², and is also called the sample variance, which represents a certain ambiguity in terminology; its square root s is called the sample standard deviation. The estimator s² differs from σ̂² by having (n − 1) instead of n in the denominator (the so-called Bessel's correction):

s^{2} = \frac{1}{n-1}\sum_{i=1}^{n}(x_{i} - \bar{x})^{2}

The difference between s² and σ̂² becomes negligibly small for large n's. In finite samples however, the motivation behind the use of s² is that it is an unbiased estimator of the underlying parameter σ², whereas σ̂² is biased. Also, by the Lehmann–Scheffé theorem the estimator s² is uniformly minimum variance unbiased (UMVU),[50] which makes it the "best" estimator among all unbiased ones. However it can be shown that the biased estimator σ̂² is "better" than s² in terms of the mean squared error (MSE) criterion. In finite samples both s² and σ̂² have scaled chi-squared distribution with (n − 1) degrees of freedom:

s^{2} \sim \frac{\sigma^{2}}{n-1}\cdot\chi^{2}_{n-1}, \qquad \hat{\sigma}^{2} \sim \frac{\sigma^{2}}{n}\cdot\chi^{2}_{n-1}

The first of these expressions shows that the variance of s² is equal to 2σ⁴/(n − 1), which is slightly greater than the σσ-element of the inverse Fisher information matrix ℐ⁻¹. Thus, s² is not an efficient estimator for σ², and moreover, since s² is UMVU, we can conclude that the finite-sample efficient estimator for σ² does not exist.

Applying the asymptotic theory, both estimators s² and σ̂² are consistent, that is they converge in probability to σ² as the sample size n → ∞. The two estimators are also both asymptotically normal:

\sqrt{n}(\hat{\sigma}^{2} - \sigma^{2}) \simeq \sqrt{n}(s^{2} - \sigma^{2}) \xrightarrow{d} N(0, 2\sigma^{4})

In particular, both estimators are asymptotically efficient for σ².
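A minimal sketch (standard library; names are illustrative) computing the maximum likelihood estimates μ̂ and σ̂² together with the Bessel-corrected sample variance s² discussed above:

```python
import random
from statistics import fmean

random.seed(0)
mu, sigma = 5.0, 2.0
data = [random.gauss(mu, sigma) for _ in range(1_000)]
n = len(data)

mu_hat = fmean(data)                                    # MLE of mu (the sample mean)
sigma2_hat = sum((x - mu_hat) ** 2 for x in data) / n   # MLE of sigma^2 (biased)
s2 = sum((x - mu_hat) ** 2 for x in data) / (n - 1)     # Bessel-corrected sample variance

print(mu_hat, sigma2_hat, s2)
```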

Confidence intervals

By Cochran's theorem, for normal distributions the sample mean μ̂ and the sample variance s² are independent, which means there can be no gain in considering their joint distribution. There is also a converse theorem: if in a sample the sample mean and sample variance are independent, then the sample must have come from the normal distribution. The independence between μ̂ and s can be employed to construct the so-called t-statistic:

t = \frac{\hat{\mu} - \mu}{s/\sqrt{n}} \sim t_{n-1}

This quantity t has the Student's t-distribution with (n − 1) degrees of freedom, and it is an ancillary statistic (independent of the value of the parameters). Inverting the distribution of this t-statistic allows us to construct the confidence interval for μ;[51] similarly, inverting the χ² distribution of the statistic s² will give us the confidence interval for σ²:[52]

where t(k,p) and χ²(k,p) are the pth quantiles of the t- and χ²-distributions respectively. These confidence intervals are of the confidence level 1 − α, meaning that the true values μ and σ² fall outside of these intervals with probability (or significance level) α. In practice people usually take α = 5%, resulting in the 95% confidence intervals.

Approximate formulas can be derived from the asymptotic distributions of μ̂ and s²:

The approximate formulas become valid for large values of n, and are more convenient for the manual calculation since the standard normal quantiles zα/2 do not depend on n. In particular, the most popular value of α = 5%, results in z0.025 = 1.96.
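A short sketch of these interval constructions, using scipy.stats for the t and χ² quantiles (the package choice and variable names are assumptions of this example; the interval expressions are the standard exact forms based on the t and χ² distributions described above):

```python
import math
import random
from statistics import fmean

from scipy import stats

random.seed(0)
data = [random.gauss(10.0, 3.0) for _ in range(50)]
n = len(data)
alpha = 0.05

mu_hat = fmean(data)
s2 = sum((x - mu_hat) ** 2 for x in data) / (n - 1)
s = math.sqrt(s2)

t_crit = stats.t.ppf(1 - alpha / 2, n - 1)
ci_mu = (mu_hat - t_crit * s / math.sqrt(n), mu_hat + t_crit * s / math.sqrt(n))

chi2_lo = stats.chi2.ppf(alpha / 2, n - 1)
chi2_hi = stats.chi2.ppf(1 - alpha / 2, n - 1)
ci_var = ((n - 1) * s2 / chi2_hi, (n - 1) * s2 / chi2_lo)

print("95% CI for mu:", ci_mu)
print("95% CI for sigma^2:", ci_var)
```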

Normality tests

Normality tests assess the likelihood that the given data set {x1, ..., xn} comes from a normal distribution. Typically the null hypothesis H0 is that the observations are distributed normally with unspecified mean μ and variance σ2, versus the alternative Ha that the distribution is arbitrary. Many tests (over 40) have been devised for this problem. The more prominent of them are outlined below:

Diagnostic plots are more intuitively appealing but subjective at the same time, as they rely on informal human judgement to accept or reject the null hypothesis.

  • Q–Q plot, also known as normal probability plot or rankit plot, is a plot of the sorted values from the data set against the expected values of the corresponding quantiles from the standard normal distribution. That is, it is a plot of points of the form (Φ⁻¹(p_k), x_(k)), where the plotting points p_k are equal to p_k = (k − α)/(n + 1 − 2α) and α is an adjustment constant, which can be anything between 0 and 1. If the null hypothesis is true, the plotted points should approximately lie on a straight line.
  • P–P plot – similar to the Q–Q plot, but used much less frequently. This method consists of plotting the points (Φ(z_(k)), p_k), where z_(k) = (x_(k) − μ̂)/σ̂. For normally distributed data this plot should lie on a 45° line between (0, 0) and (1, 1).

Goodness-of-fit tests:

Moment-based tests:

  • D'Agostino's K-squared test
  • Jarque–Bera test
  • Shapiro–Wilk test: This is based on the fact that the line in the Q–Q plot has the slope of σ. The test compares the least squares estimate of that slope with the value of the sample variance, and rejects the null hypothesis if these two quantities differ significantly.

Tests based on the empirical distribution function:

Bayesian analysis of the normal distribution

Bayesian analysis of normally distributed data is complicated by the many different possibilities that may be considered:

  • Either the mean, or the variance, or neither, may be considered a fixed quantity.
  • When the variance is unknown, analysis may be done directly in terms of the variance, or in terms of the precision, the reciprocal of the variance. The reason for expressing the formulas in terms of precision is that the analysis of most cases is simplified.
  • Both univariate and multivariate cases need to be considered.
  • Either conjugate or improper prior distributions may be placed on the unknown variables.
  • An additional set of cases occurs in Bayesian linear regression, where in the basic model the data is assumed to be normally distributed, and normal priors are placed on the regression coefficients. The resulting analysis is similar to the basic cases of independent identically distributed data.

The formulas for the non-linear-regression cases are summarized in the conjugate prior article.

Sum of two quadratics

Scalar form

The following auxiliary formula is useful for simplifying the posterior update equations, which otherwise become fairly tedious.

This equation rewrites the sum of two quadratics in x by expanding the squares, grouping the terms in x, and completing the square. Note the following about the complex constant factors attached to some of the terms:

  1. The factor (ay + bz)/(a + b) has the form of a weighted average of y and z.
  2. This shows that this factor can be thought of as resulting from a situation where the reciprocals of quantities a and b add directly, so to combine a and b themselves, it's necessary to reciprocate, add, and reciprocate the result again to get back into the original units. This is exactly the sort of operation performed by the harmonic mean, so it is not surprising that ab/(a + b) is one-half the harmonic mean of a and b.
Vector form

A similar formula can be written for the sum of two vector quadratics: If x, y, z are vectors of length k, and A and B are symmetric, invertible matrices of size k × k, then

where

Note that the form x′Ax is called a quadratic form and is a scalar:

x'Ax = \sum_{i,j} a_{ij}\, x_{i} x_{j}

In other words, it sums up all possible combinations of products of pairs of elements from x, with a separate coefficient for each. In addition, since x_i x_j = x_j x_i, only the sum a_ij + a_ji matters for any off-diagonal elements of A, and there is no loss of generality in assuming that A is symmetric. Furthermore, if A is symmetric, then the form x′Ay = y′Ax.

Sum of differences from the mean

Another useful formula is as follows:

where

With known variance

For a set of i.i.d. normally distributed data points X of size n where each individual point x follows x ~ N(μ, σ²) with known variance σ², the conjugate prior distribution is also normally distributed.

This can be shown more easily by rewriting the variance as the precision, i.e. using τ = 1/σ². Then if x ~ N(μ, 1/τ) and μ ~ N(μ₀, 1/τ₀), we proceed as follows.

First, the likelihood function is (using the formula above for the sum of differences from the mean):

Then, we proceed as follows:

In the above derivation, we used the formula above for the sum of two quadratics and eliminated all constant factors not involving μ. The result is the kernel of a normal distribution, with mean (nτx̄ + τ₀μ₀)/(nτ + τ₀) and precision nτ + τ₀, i.e.

This can be written as a set of Bayesian update equations for the posterior parameters in terms of the prior parameters:

That is, to combine n data points with total precision of nτ (or equivalently, total variance of σ²/n) and mean of values x̄, derive a new total precision simply by adding the total precision of the data to the prior total precision, and form a new mean through a precision-weighted average, i.e. a weighted average of the data mean and the prior mean, each weighted by the associated total precision. This makes logical sense if the precision is thought of as indicating the certainty of the observations: In the distribution of the posterior mean, each of the input components is weighted by its certainty, and the certainty of this distribution is the sum of the individual certainties. (For the intuition of this, compare the expression "the whole is (or is not) greater than the sum of its parts". In addition, consider that the knowledge of the posterior comes from a combination of the knowledge of the prior and likelihood, so it makes sense that we are more certain of it than of either of its components.)

The above formula reveals why it is more convenient to do Bayesian analysis of conjugate priors for the normal distribution in terms of the precision. The posterior precision is simply the sum of the prior and likelihood precisions, and the posterior mean is computed through a precision-weighted average, as described above. The same formulas can be written in terms of variance by reciprocating all the precisions, yielding the more ugly formulas
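A minimal sketch (standard library; names are illustrative) of the precision-form update just described, for a normal likelihood with known variance and a normal prior on the mean:

```python
import random
from statistics import fmean

def posterior_for_mean(data, sigma, mu0, sigma0):
    # Known-variance update: precisions add, and means combine by precision weighting
    tau, tau0 = 1 / sigma ** 2, 1 / sigma0 ** 2
    n = len(data)
    tau_post = tau0 + n * tau
    mu_post = (tau0 * mu0 + n * tau * fmean(data)) / tau_post
    return mu_post, 1 / tau_post          # posterior mean and posterior variance

random.seed(0)
data = [random.gauss(3.0, 1.0) for _ in range(20)]
print(posterior_for_mean(data, sigma=1.0, mu0=0.0, sigma0=10.0))
```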

With known mean

For a set of i.i.d. normally distributed data points X of size n where each individual point x follows x ~ N(μ, σ²) with known mean μ, the conjugate prior of the variance has an inverse gamma distribution or a scaled inverse chi-squared distribution. The two are equivalent except for having different parameterizations. Although the inverse gamma is more commonly used, we use the scaled inverse chi-squared for the sake of convenience. The prior for σ² is as follows:

The likelihood function from above, written in terms of the variance, is:

where

Then:

The above is also a scaled inverse chi-squared distribution where

or equivalently

Reparameterizing in terms of an inverse gamma distribution, the result is:

With unknown mean and unknown variance

For a set of i.i.d. normally distributed data points X of size n where each individual point x follows x ~ N(μ, σ²) with unknown mean μ and unknown variance σ², a combined (multivariate) conjugate prior is placed over the mean and variance, consisting of a normal-inverse-gamma distribution. Logically, this originates as follows:

  1. From the analysis of the case with unknown mean but known variance, we see that the update equations involve sufficient statistics computed from the data consisting of the mean of the data points and the total variance of the data points, computed in turn from the known variance divided by the number of data points.
  2. From the analysis of the case with unknown variance but known mean, we see that the update equations involve sufficient statistics over the data consisting of the number of data points and sum of squared deviations.
  3. Keep in mind that the posterior update values serve as the prior distribution when further data is handled. Thus, we should logically think of our priors in terms of the sufficient statistics just described, with the same semantics kept in mind as much as possible.
  4. To handle the case where both mean and variance are unknown, we could place independent priors over the mean and variance, with fixed estimates of the average mean, total variance, number of data points used to compute the variance prior, and sum of squared deviations. Note however that in reality, the total variance of the mean depends on the unknown variance, and the sum of squared deviations that goes into the variance prior (appears to) depend on the unknown mean. In practice, the latter dependence is relatively unimportant: Shifting the actual mean shifts the generated points by an equal amount, and on average the squared deviations will remain the same. This is not the case, however, with the total variance of the mean: As the unknown variance increases, the total variance of the mean will increase proportionately, and we would like to capture this dependence.
  5. This suggests that we create a conditional prior of the mean on the unknown variance, with a hyperparameter specifying the mean of the pseudo-observations associated with the prior, and another parameter specifying the number of pseudo-observations. This number serves as a scaling parameter on the variance, making it possible to control the overall variance of the mean relative to the actual variance parameter. The prior for the variance also has two hyperparameters, one specifying the sum of squared deviations of the pseudo-observations associated with the prior, and another specifying once again the number of pseudo-observations. Note that each of the priors has a hyperparameter specifying the number of pseudo-observations, and in each case this controls the relative variance of that prior. These are given as two separate hyperparameters so that the variance (aka the confidence) of the two priors can be controlled separately.
  6. This leads immediately to the normal-inverse-gamma distribution, which is the product of the two distributions just defined, with conjugate priors used (an inverse gamma distribution over the variance, and a normal distribution over the mean, conditional on the variance) and with the same four parameters just defined.

The priors are normally defined as follows:

The update equations can be derived, and look as follows:

The respective numbers of pseudo-observations add the number of actual observations to them. The new mean hyperparameter is once again a weighted average, this time weighted by the relative numbers of observations. Finally, the update for the sum-of-squared-deviations hyperparameter is similar to the case with known mean, but in this case the sum of squared deviations is taken with respect to the observed data mean rather than the true mean, and as a result a new "interaction term" needs to be added to take care of the additional error source stemming from the deviation between prior and data mean.

Proof

The prior distributions are

Therefore, the joint prior is

The likelihood function from the section above with known variance is:

Writing it in terms of variance rather than precision, we get:

where

Therefore, the posterior is (dropping the hyperparameters as conditioning factors):

In other words, the posterior distribution has the form of a product of a normal distribution over p(μ | σ²) times an inverse gamma distribution over p(σ²), with parameters that are the same as the update equations above.

Occurrence and applications

The occurrence of normal distribution in practical problems can be loosely classified into four categories:

  1. Exactly normal distributions;
  2. Approximately normal laws, for example when such approximation is justified by the central limit theorem; and
  3. Distributions modeled as normal – the normal distribution being the distribution with maximum entropy for a given mean and variance.
  4. Regression problems – the normal distribution being found after systematic effects have been modeled sufficiently well.

Exact normality

The ground state of a quantum harmonic oscillator has the Gaussian distribution.

Certain quantities in physics are distributed normally, as was first demonstrated by James Clerk Maxwell. Examples of such quantities are:

  • Probability density function of a ground state in a quantum harmonic oscillator.
  • The position of a particle that experiences diffusion. If initially the particle is located at a specific point (that is its probability distribution is the Dirac delta function), then after time t its location is described by a normal distribution with variance t, which satisfies the diffusion equation ∂f(x, t)/∂t = (1/2) ∂²f(x, t)/∂x². If the initial location is given by a certain density function g(x), then the density at time t is the convolution of g and the normal PDF.

Approximate normality

Approximately normal distributions occur in many situations, as explained by the central limit theorem. When the outcome is produced by many small effects acting additively and independently, its distribution will be close to normal. The normal approximation will not be valid if the effects act multiplicatively (instead of additively), or if there is a single external influence that has a considerably larger magnitude than the rest of the effects.

Assumed normality

Histogram of sepal widths for Iris versicolor from Fisher's Iris flower data set, with superimposed best-fitting normal distribution.

I can only recognize the occurrence of the normal curve – the Laplacian curve of errors – as a very abnormal phenomenon. It is roughly approximated to in certain distributions; for this reason, and on account for its beautiful simplicity, we may, perhaps, use it as a first approximation, particularly in theoretical investigations.

There are statistical methods to empirically test that assumption; see the above Normality tests section.

  • In biology, the logarithm of various variables tends to have a normal distribution, that is, they tend to have a log-normal distribution (after separation on male/female subpopulations), with examples including:
    • Measures of size of living tissue (length, height, skin area, weight);[53]
    • The length of inert appendages (hair, claws, nails, teeth) of biological specimens, in the direction of growth; presumably the thickness of tree bark also falls under this category;
    • Certain physiological measurements, such as blood pressure of adult humans.
  • In finance, in particular the Black–Scholes model, changes in the logarithm of exchange rates, price indices, and stock market indices are assumed normal (these variables behave like compound interest, not like simple interest, and so are multiplicative). Some mathematicians such as Benoit Mandelbrot have argued that log-Levy distributions, which possess heavy tails, would be a more appropriate model, in particular for the analysis of stock market crashes. The use of the assumption of normal distribution occurring in financial models has also been criticized by Nassim Nicholas Taleb in his works.
  • Measurement errors in physical experiments are often modeled by a normal distribution. This use of a normal distribution does not imply that one is assuming the measurement errors are normally distributed, rather using the normal distribution produces the most conservative predictions possible given only knowledge about the mean and variance of the errors.[54]
  • In standardized testing, results can be made to have a normal distribution by either selecting the number and difficulty of questions (as in the IQ test) or transforming the raw test scores into "output" scores by fitting them to the normal distribution. For example, the SAT's traditional range of 200–800 is based on a normal distribution with a mean of 500 and a standard deviation of 100.
Fitted cumulative normal distribution to October rainfalls, see distribution fitting

Methodological problems and peer review

John Ioannidis argues that using normally distributed standard deviations as standards for validating research findings leaves falsifiable predictions about phenomena that are not normally distributed untested. This includes, for example, phenomena that only appear when all necessary conditions are present and one cannot be a substitute for another in an addition-like way, and phenomena that are not randomly distributed. Ioannidis argues that standard deviation-centered validation gives a false appearance of validity to hypotheses and theories where some but not all falsifiable predictions are normally distributed, since the portion of falsifiable predictions that there is evidence against may be, and in some cases is, in the non-normally distributed parts of the range of falsifiable predictions, as well as baselessly dismissing hypotheses for which none of the falsifiable predictions are normally distributed as if they were unfalsifiable, when in fact they do make falsifiable predictions. It is argued by Ioannidis that many cases of mutually exclusive theories being accepted as "validated" by research journals are caused by failure of the journals to take in empirical falsifications of non-normally distributed predictions, and not because mutually exclusive theories are true, which they cannot be, although two mutually exclusive theories can both be wrong and a third one correct.[56]

Computational methods

Generating values from normal distribution

The bean machine, a device invented by Francis Galton, can be called the first generator of normal random variables. This machine consists of a vertical board with interleaved rows of pins. Small balls are dropped from the top and then bounce randomly left or right as they hit the pins. The balls are collected into bins at the bottom and settle down into a pattern resembling the Gaussian curve.

In computer simulations, especially in applications of the Monte-Carlo method, it is often desirable to generate values that are normally distributed. The algorithms listed below all generate the standard normal deviates, since a N(μ, σ2) can be generated as X = μ + σZ, where Z is standard normal. All these algorithms rely on the availability of a random number generator U capable of producing uniform random variates.

  • The most straightforward method is based on the probability integral transform property: if U is distributed uniformly on (0,1), then Φ−1(U) will have the standard normal distribution. The drawback of this method is that it relies on calculation of the probit function Φ−1, which cannot be done analytically. Some approximate methods are described in Hart (1968) and in the erf article. Wichura gives a fast algorithm for computing this function to 16 decimal places,[57] which is used by R to compute random variates of the normal distribution.
  • An easy-to-program approximate approach that relies on the central limit theorem is as follows: generate 12 uniform U(0,1) deviates, add them all up, and subtract 6 – the resulting random variable will have approximately standard normal distribution. In truth, the distribution will be Irwin–Hall, which is a 12-section eleventh-order polynomial approximation to the normal distribution. This random deviate will have a limited range of (−6, 6).[58] Note that in a true normal distribution, only 0.00034% of all samples will fall outside ±6σ.
  • The Box–Muller method uses two independent random numbers U and V distributed uniformly on (0,1). Then the two random variables X = √(−2 ln U) cos(2πV) and Y = √(−2 ln U) sin(2πV) (implemented in the sketch after this list)
    will both have the standard normal distribution, and will be independent. This formulation arises because for a bivariate normal random vector (X, Y) the squared norm X2 + Y2 will have the chi-squared distribution with two degrees of freedom, which is an easily generated exponential random variable corresponding to the quantity −2ln(U) in these equations; and the angle is distributed uniformly around the circle, chosen by the random variable V.
  • The Marsaglia polar method is a modification of the Box–Muller method which does not require computation of the sine and cosine functions. In this method, U and V are drawn from the uniform (−1,1) distribution, and then S = U2 + V2 is computed. If S is greater or equal to 1, then the method starts over, otherwise the two quantities
    are returned. Again, X and Y are independent, standard normal random variables.
  • The Ratio method[59] is a rejection method. The algorithm proceeds as follows:
    • Generate two independent uniform deviates U and V;
    • Compute X = √(8/e) (V − 0.5)/U;
    • Optional: if X² ≤ 5 − 4e^(1/4)U then accept X and terminate algorithm;
    • Optional: if X² ≥ 4e^(−1.35)/U + 1.4 then reject X and start over from step 1;
    • If X2 ≤ −4 lnU then accept X, otherwise start over the algorithm.
    The two optional steps allow the evaluation of the logarithm in the last step to be avoided in most cases. These steps can be greatly improved[60] so that the logarithm is rarely evaluated.
  • The ziggurat algorithm[61] is faster than the Box–Muller transform and still exact. In about 97% of all cases it uses only two random numbers, one random integer and one random uniform, one multiplication and an if-test. Only in 3% of the cases, where the combination of those two falls outside the "core of the ziggurat" (a kind of rejection sampling using logarithms), do exponentials and more uniform random numbers have to be employed.
  • Integer arithmetic can be used to sample from the standard normal distribution.[62] This method is exact in the sense that it satisfies the conditions of ideal approximation;[63] i.e., it is equivalent to sampling a real number from the standard normal distribution and rounding this to the nearest representable floating point number.
  • There is also some investigation[64] into the connection between the fast Hadamard transform and the normal distribution, since the transform employs just addition and subtraction and by the central limit theorem random numbers from almost any distribution will be transformed into the normal distribution. In this regard a series of Hadamard transforms can be combined with random permutations to turn arbitrary data sets into a normally distributed data.
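A minimal Python sketch of the Box–Muller method referenced above (standard library; names are illustrative):

```python
import math
import random

def box_muller():
    # Two independent uniforms on (0, 1) give two independent standard normal deviates
    u = random.random() or 1e-12          # guard against u == 0 before taking the log
    v = random.random()
    r = math.sqrt(-2.0 * math.log(u))
    return r * math.cos(2 * math.pi * v), r * math.sin(2 * math.pi * v)

random.seed(0)
print([box_muller() for _ in range(3)])
```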

Numerical approximations for the normal CDF and normal quantile function

The standard normal CDF is widely used in scientific and statistical computing.

The values Φ(x) may be approximated very accurately by a variety of methods, such as numerical integration, Taylor series, asymptotic series and continued fractions. Different approximations are used depending on the desired level of accuracy.

  • Zelen & Severo (1964) give the approximation for Φ(x) for x > 0 with the absolute error ε(x) < 7.5·10−8 (algorithm 26.2.17):
    where ϕ(x) is the standard normal PDF, and b0 = 0.2316419, b1 = 0.319381530, b2 = −0.356563782, b3 = 1.781477937, b4 = −1.821255978, b5 = 1.330274429. (A short sketch of this approximation appears after this list.)
  • Hart (1968) lists some dozens of approximations – by means of rational functions, with or without exponentials – for the erfc() function. His algorithms vary in the degree of complexity and the resulting precision, with maximum absolute precision of 24 digits. An algorithm by West (2009) combines Hart's algorithm 5666 with a continued fraction approximation in the tail to provide a fast computation algorithm with a 16-digit precision.
  • Cody (1969) after recalling Hart68 solution is not suited for erf, gives a solution for both erf and erfc, with maximal relative error bound, via Rational Chebyshev Approximation.
  • Marsaglia (2004) suggested a simple algorithm[note 1] based on the Taylor series expansion
    for calculating Φ(x) with arbitrary precision. The drawback of this algorithm is comparatively slow calculation time (for example it takes over 300 iterations to calculate the function with 16 digits of precision when x = 10).
  • The GNU Scientific Library calculates values of the standard normal CDF using Hart's algorithms and approximations with Chebyshev polynomials.
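A sketch of the Zelen & Severo approximation listed above, assuming the usual form of algorithm 26.2.17, Φ(x) ≈ 1 − ϕ(x)(b1·t + b2·t² + b3·t³ + b4·t⁴ + b5·t⁵) with t = 1/(1 + b0·x) for x > 0 (names are illustrative):

```python
import math
from statistics import NormalDist

B0, B1, B2, B3, B4, B5 = 0.2316419, 0.319381530, -0.356563782, 1.781477937, -1.821255978, 1.330274429

def phi_approx(x):
    # Zelen & Severo (Abramowitz & Stegun 26.2.17), for x > 0
    t = 1.0 / (1.0 + B0 * x)
    pdf = math.exp(-x * x / 2.0) / math.sqrt(2.0 * math.pi)
    poly = t * (B1 + t * (B2 + t * (B3 + t * (B4 + t * B5))))
    return 1.0 - pdf * poly

for x in (0.5, 1.0, 2.0, 3.0):
    print(x, phi_approx(x), NormalDist().cdf(x))   # absolute error should stay below 7.5e-8
```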

Shore (1982) introduced simple approximations that may be incorporated in stochastic optimization models of engineering and operations research, like reliability engineering and inventory analysis. Denoting p = Φ(z), the simplest approximation for the quantile function is:

This approximation delivers for z a maximum absolute error of 0.026 (for 0.5 ≤ p ≤ 0.9999, corresponding to 0 ≤ z ≤ 3.719). For p < 1/2 replace p by 1 − p and change sign. Another approximation, somewhat less accurate, is the single-parameter approximation:

The latter had served to derive a simple approximation for the loss integral of the normal distribution, defined by

This approximation is particularly accurate for the right far-tail (maximum error of 10−3 for z≥1.4). Highly accurate approximations for the CDF, based on Response Modeling Methodology (RMM, Shore, 2011, 2012), are shown in Shore (2005).

Some more approximations can be found at: Error function#Approximation with elementary functions. In particular, small relative error on the whole domain for the CDF and the quantile function as well, is achieved via an explicitly invertible formula by Sergei Winitzki in 2008.

History

Development

Some authors[65][66] attribute the credit for the discovery of the normal distribution to de Moivre, who in 1738[note 2] published in the second edition of his "The Doctrine of Chances" the study of the coefficients in the binomial expansion of (a + b)ⁿ. De Moivre proved that the middle term in this expansion has the approximate magnitude of 2/√(2πn), and that "If m or 1/2n be a Quantity infinitely great, then the Logarithm of the Ratio, which a Term distant from the middle by the Interval ℓ, has to the middle Term, is −2ℓℓ/n."[67] Although this theorem can be interpreted as the first obscure expression for the normal probability law, Stigler points out that de Moivre himself did not interpret his results as anything more than the approximate rule for the binomial coefficients, and in particular de Moivre lacked the concept of the probability density function.[68]

Carl Friedrich Gauss discovered the normal distribution in 1809 as a way to rationalize the method of least squares.

In 1823 Gauss published his monograph "Theoria combinationis observationum erroribus minimis obnoxiae" where among other things he introduces several important statistical concepts, such as the method of least squares, the method of maximum likelihood, and the normal distribution. Gauss used M, M′, M′′, ... to denote the measurements of some unknown quantity V, and sought the "most probable" estimator of that quantity: the one that maximizes the probability φ(M − V) · φ(M′ − V) · φ(M′′ − V) · ... of obtaining the observed experimental results. In his notation φΔ is the probability density function of the measurement errors of magnitude Δ. Not knowing what the function φ is, Gauss requires that his method should reduce to the well-known answer: the arithmetic mean of the measured values.[note 3] Starting from these principles, Gauss demonstrates that the only law that rationalizes the choice of arithmetic mean as an estimator of the location parameter, is the normal law of errors:[69]

where h is "the measure of the precision of the observations". Using this normal law as a generic model for errors in the experiments, Gauss formulates what is now known as the non-linear weighted least squares method.[70]

Pierre-Simon Laplace proved the central limit theorem in 1810, consolidating the importance of the normal distribution in statistics.

Although Gauss was the first to suggest the normal distribution law, Laplace made significant contributions.[note 4] It was Laplace who first posed the problem of aggregating several observations in 1774,[71] although his own solution led to the Laplacian distribution. It was Laplace who first calculated the value of the integral ∫ e^(−t²) dt = √π in 1782, providing the normalization constant for the normal distribution.[72] Finally, it was Laplace who in 1810 proved and presented to the Academy the fundamental central limit theorem, which emphasized the theoretical importance of the normal distribution.[73]

It is of interest to note that in 1809 an Irish-American mathematician Robert Adrain published two insightful but flawed derivations of the normal probability law, simultaneously and independently from Gauss.[74] His works remained largely unnoticed by the scientific community, until in 1871 they were exhumed by Abbe.[75]

In the middle of the 19th century Maxwell demonstrated that the normal distribution is not just a convenient mathematical tool, but may also occur in natural phenomena:[76] "The number of particles whose velocity, resolved in a certain direction, lies between x and x + dx is N (1/(α√π)) e^(−x²/α²) dx."

Naming

Since its introduction, the normal distribution has been known by many different names: the law of error, the law of facility of errors, Laplace's second law, Gaussian law, etc. Gauss himself apparently coined the term with reference to the "normal equations" involved in its applications, with normal having its technical meaning of orthogonal rather than "usual".[77] However, by the end of the 19th century some authors[note 5] had started using the name normal distribution, where the word "normal" was used as an adjective – the term now being seen as a reflection of the fact that this distribution was seen as typical, common – and thus "normal". Peirce (one of those authors) once defined "normal" thus: "...the 'normal' is not the average (or any other kind of mean) of what actually occurs, but of what would, in the long run, occur under certain circumstances."[78] Around the turn of the 20th century Pearson popularized the term normal as a designation for this distribution.[79]

Many years ago I called the Laplace–Gaussian curve the normal curve, which name, while it avoids an international question of priority, has the disadvantage of leading people to believe that all other distributions of frequency are in one sense or another 'abnormal'.

Also, it was Pearson who first wrote the distribution in terms of the standard deviation σ as in modern notation. Soon after this, in year 1915, Fisher added the location parameter to the formula for normal distribution, expressing it in the way it is written nowadays:

The term "standard normal", which denotes the normal distribution with zero mean and unit variance came into general use around the 1950s, appearing in the popular textbooks by P. G. Hoel (1947) "Introduction to mathematical statistics" and A. M. Mood (1950) "Introduction to the theory of statistics".[80]

See also

Notes

  1. ^ For example, this algorithm is given in the article Bc programming language.
  2. ^ De Moivre first published his findings in 1733, in a pamphlet "Approximatio ad Summam Terminorum Binomii (a + b)n in Seriem Expansi" that was designated for private circulation only. But it was not until the year 1738 that he made his results publicly available. The original pamphlet was reprinted several times, see for example Walker (1985).
  3. ^ "It has been customary certainly to regard as an axiom the hypothesis that if any quantity has been determined by several direct observations, made under the same circumstances and with equal care, the arithmetical mean of the observed values affords the most probable value, if not rigorously, yet very nearly at least, so that it is always most safe to adhere to it." — Gauss (1809, section 177)
  4. ^ "My custom of terming the curve the Gauss–Laplacian or normal curve saves us from proportioning the merit of discovery between the two great astronomer mathematicians." quote from Pearson (1905, p. 189)
  5. ^ Besides those specifically referenced here, such use is encountered in the works of Peirce, Galton (Galton (1889, chapter V)) and Lexis (Lexis (1878), Rohrbasser & Véron (2003)) c. 1875.[citation needed]

References

Citations

  1. ^ Weisstein, Eric W. "Normal Distribution". mathworld.wolfram.com. Retrieved August 15, 2020.
  2. ^ Normal Distribution, Gale Encyclopedia of Psychology
  3. ^ Casella & Berger (2001, p. 102)
  4. ^ Lyon, A. (2014). Why are Normal Distributions Normal?, The British Journal for the Philosophy of Science.
  5. ^ a b "Normal Distribution". www.mathsisfun.com. Retrieved August 15, 2020.
  6. ^ Stigler (1982)
  7. ^ Halperin, Hartley & Hoel (1965, item 7)
  8. ^ McPherson (1990, p. 110)
  9. ^ Bernardo & Smith (2000, p. 121)
  10. ^ Scott, Clayton; Nowak, Robert (August 7, 2003). "The Q-function". Connexions.
  11. ^ Barak, Ohad (April 6, 2006). "Q Function and Error Function" (PDF). Tel Aviv University. Archived from the original (PDF) on March 25, 2009.
  12. ^ Weisstein, Eric W. "Normal Distribution Function". MathWorld.
  13. ^ Abramowitz, Milton; Stegun, Irene Ann, eds. (1983) [June 1964]. "Chapter 26, eqn 26.2.12". Handbook of Mathematical Functions with Formulas, Graphs, and Mathematical Tables. Applied Mathematics Series. Vol. 55 (Ninth reprint with additional corrections of tenth original printing with corrections (December 1972); first ed.). Washington D.C.; New York: United States Department of Commerce, National Bureau of Standards; Dover Publications. p. 932. ISBN 978-0-486-61272-0. LCCN 64-60036. MR 0167642. LCCN 65-12253.
  14. ^ "Wolfram Alpha: Computational Knowledge Engine". Wolframalpha.com. Retrieved March 3, 2017.
  15. ^ "Wolfram Alpha: Computational Knowledge Engine". Wolframalpha.com.
  16. ^ "Wolfram Alpha: Computational Knowledge Engine". Wolframalpha.com. Retrieved March 3, 2017.
  17. ^ Reference needed
  18. ^ Cover, Thomas M.; Thomas, Joy A. (2006). Elements of Information Theory. John Wiley and Sons. p. 254. ISBN 9780471748816.
  19. ^ Park, Sung Y.; Bera, Anil K. (2009). "Maximum Entropy Autoregressive Conditional Heteroskedasticity Model" (PDF). Journal of Econometrics. 150 (2): 219–230. CiteSeerX 10.1.1.511.9750. doi:10.1016/j.jeconom.2008.12.014. Archived from the original (PDF) on March 7, 2016. Retrieved June 2, 2011.
  20. ^ Geary RC(1936) The distribution of the "Student's" ratio for the non-normal samples". Supplement to the Journal of the Royal Statistical Society 3 (2): 178–184
  21. ^ Lukacs, Eugene (March 1942). "A Characterization of the Normal Distribution". Annals of Mathematical Statistics. 13 (1): 91–93. doi:10.1214/AOMS/1177731647. ISSN 0003-4851. JSTOR 2236166. MR 0006626. Zbl 0060.28509. Wikidata Q55897617.
  22. ^ a b c Patel & Read (1996, [2.1.4])
  23. ^ Fan (1991, p. 1258)
  24. ^ Patel & Read (1996, [2.1.8])
  25. ^ Papoulis, Athanasios. Probability, Random Variables and Stochastic Processes (4th ed.). p. 148.
  26. ^ Bryc (1995, p. 23)
  27. ^ Bryc (1995, p. 24)
  28. ^ Cover & Thomas (2006, p. 254)
  29. ^ Williams, David (2001). Weighing the odds : a course in probability and statistics (Reprinted. ed.). Cambridge [u.a.]: Cambridge Univ. Press. pp. 197–199. ISBN 978-0-521-00618-7.
  30. ^ Smith, José M. Bernardo; Adrian F. M. (2000). Bayesian theory (Reprint ed.). Chichester [u.a.]: Wiley. pp. 209, 366. ISBN 978-0-471-49464-5.
  31. ^ O'Hagan, A. (1994) Kendall's Advanced Theory of statistics, Vol 2B, Bayesian Inference, Edward Arnold. ISBN 0-340-52922-9 (Section 5.40)
  32. ^ a b Bryc (1995, p. 35)
  33. ^ UIUC, Lecture 21. The Multivariate Normal Distribution, 21.6:"Individually Gaussian Versus Jointly Gaussian".
  34. ^ Edward L. Melnick and Aaron Tenenbein, "Misspecifications of the Normal Distribution", The American Statistician, volume 36, number 4 November 1982, pages 372–373
  35. ^ "Kullback Leibler (KL) Distance of Two Normal (Gaussian) Probability Distributions". Allisons.org. December 5, 2007. Retrieved March 3, 2017.
  36. ^ Jordan, Michael I. (February 8, 2010). "Stat260: Bayesian Modeling and Inference: The Conjugate Prior for the Normal Distribution" (PDF).
  37. ^ Amari & Nagaoka (2000)
  38. ^ "Normal Approximation to Poisson Distribution". Stat.ucla.edu. Retrieved March 3, 2017.
  39. ^ a b Das, Abhranil (2020). "A method to integrate and classify normal distributions". arXiv:2012.14331 [stat.ML].
  40. ^ Bryc (1995, p. 27)
  41. ^ Weisstein, Eric W. "Normal Product Distribution". MathWorld. wolfram.com.
  42. ^ Lukacs, Eugene (1942). "A Characterization of the Normal Distribution". The Annals of Mathematical Statistics. 13 (1): 91–3. doi:10.1214/aoms/1177731647. ISSN 0003-4851. JSTOR 2236166.
  43. ^ Basu, D.; Laha, R. G. (1954). "On Some Characterizations of the Normal Distribution". Sankhyā. 13 (4): 359–62. ISSN 0036-4452. JSTOR 25048183.
  44. ^ Lehmann, E. L. (1997). Testing Statistical Hypotheses (2nd ed.). Springer. p. 199. ISBN 978-0-387-94919-2.
  45. ^ Patel & Read (1996, [2.3.6])
  46. ^ Galambos & Simonelli (2004, Theorem 3.5)
  47. ^ a b Lukacs & King (1954)
  48. ^ Quine, M.P. (1993). "On three characterisations of the normal distribution". Probability and Mathematical Statistics. 14 (2): 257–263.
  49. ^ John, S (1982). "The three parameter two-piece normal family of distributions and its fitting". Communications in Statistics - Theory and Methods. 11 (8): 879–885. doi:10.1080/03610928208828279.
  50. ^ a b Krishnamoorthy (2006, p. 127)
  51. ^ Krishnamoorthy (2006, p. 130)
  52. ^ Krishnamoorthy (2006, p. 133)
  53. ^ Huxley (1932)
  54. ^ Jaynes, Edwin T. (2003). Probability Theory: The Logic of Science. Cambridge University Press. pp. 592–593. ISBN 9780521592710.
  55. ^ Oosterbaan, Roland J. (1994). "Chapter 6: Frequency and Regression Analysis of Hydrologic Data" (PDF). In Ritzema, Henk P. (ed.). Drainage Principles and Applications, Publication 16 (second revised ed.). Wageningen, The Netherlands: International Institute for Land Reclamation and Improvement (ILRI). pp. 175–224. ISBN 978-90-70754-33-4.
  56. ^ Why Most Published Research Findings Are False, John P. A. Ioannidis, 2005
  57. ^ Wichura, Michael J. (1988). "Algorithm AS241: The Percentage Points of the Normal Distribution". Applied Statistics. 37 (3): 477–84. doi:10.2307/2347330. JSTOR 2347330.
  58. ^ Johnson, Kotz & Balakrishnan (1995, Equation (26.48))
  59. ^ Kinderman & Monahan (1977)
  60. ^ Leva (1992)
  61. ^ Marsaglia & Tsang (2000)
  62. ^ Karney (2016)
  63. ^ Monahan (1985, section 2)
  64. ^ Wallace (1996)
  65. ^ Johnson, Kotz & Balakrishnan (1994, p. 85)
  66. ^ Le Cam & Lo Yang (2000, p. 74)
  67. ^ De Moivre, Abraham (1733), Corollary I – see Walker (1985, p. 77)
  68. ^ Stigler (1986, p. 76)
  69. ^ Gauss (1809, section 177)
  70. ^ Gauss (1809, section 179)
  71. ^ Laplace (1774, Problem III)
  72. ^ Pearson (1905, p. 189)
  73. ^ Stigler (1986, p. 144)
  74. ^ Stigler (1978, p. 243)
  75. ^ Stigler (1978, p. 244)
  76. ^ Maxwell (1860, p. 23)
  77. ^ Jaynes, Edwin J.; Probability Theory: The Logic of Science, Ch. 7.
  78. ^ Peirce, Charles S. (c. 1909 MS), Collected Papers v. 6, paragraph 327.
  79. ^ Kruskal & Stigler (1997).
  80. ^ "Earliest uses... (entry STANDARD NORMAL CURVE)".

Sources

External links