Unverified commit 7779b07b authored by theorashid, committed by GitHub

Fix typos in normalizing_flows_in_pytensor notebook (#1611)

Parent 46f8227d
@@ -34,7 +34,7 @@
 "- $f(\\cdot)$ if the (known!) PDF of the variable $X$\n",
 "- $G(\\cdot)$ is a function with nice properties.\n",
 "\n",
-"The \"nice properties\" require (in the most general case) that $G(x)$ is a $C^1$ diffeomorphism, which means that it is 1) continuous and differentiable almost everywhere; 2) it is bijective, and 3) its derivaties are also bijective. \n",
+"The \"nice properties\" require (in the most general case) that $G(x)$ is a $C^1$ diffeomorphism, which means that it is 1) continuous and differentiable almost everywhere; 2) it is bijective, and 3) its derivatives are also bijective. \n",
 "\n",
 "A simpler requirement is that $G(x)$ is continuous, bijective, and monotonic. That will get us 99% of the way there. Hey, $\\exp$ is continuous, bijective, and monotonic -- what a coincidence!\n"
 ]
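The change-of-variables rule discussed in the hunk above can be sanity-checked numerically. As a rough sketch (not part of this commit; the standard-normal base distribution and the $\exp$ transform are illustrative assumptions): if $Z \sim N(0, 1)$ and $X = \exp(Z)$, then $f_X(x) = f_Z(\ln x) \cdot |1/x|$, which is the standard lognormal PDF.

```python
import numpy as np
from scipy.stats import lognorm, norm

# Change of variables with G = exp, so G^{-1}(x) = ln(x) and
# |d/dx G^{-1}(x)| = 1/x is the Jacobian correction term.
x = np.linspace(0.1, 5.0, 200)
pdf_by_hand = norm.pdf(np.log(x)) / x     # base density times Jacobian correction
pdf_scipy = lognorm(s=1.0).pdf(x)         # scipy's standard lognormal, for reference
assert np.allclose(pdf_by_hand, pdf_scipy)
```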
@@ -412,7 +412,7 @@
 ],
 "source": [
 "z_values = pt.dvector(\"z_values\")\n",
-"# The funtion `pm.logp` does the magic!\n",
+"# The function `pm.logp` does the magic!\n",
 "z_logp = pm.logp(z, z_values, jacobian=True)\n",
 "# We do this rewrite to make the computation more stable.\n",
 "rewrite_graph(z_logp).dprint()"
@@ -668,7 +668,7 @@
 "id": "5f9a7a50",
 "metadata": {},
 "source": [
-"Theese distribution are essentially the same."
+"These distribution are essentially the same."
 ]
 },
 {
@@ -715,7 +715,7 @@
 "\n",
 "So, the inverse of their composition is $G^{-1} \\equiv (J^{-1} \\circ H^{-1}) = J^{-1}(H^{-1}(x)) = J^{-1}(\\ln(x)) = \\frac{\\ln(x) - a}{b}$\n",
 "\n",
-"For the correction term, we need the determinant of the jacobian. Since $G$ is a scalar function, this is just the absolutel value of the gradient:\n",
+"For the correction term, we need the determinant of the jacobian. Since $G$ is a scalar function, this is just the absolute value of the gradient:\n",
 "\n",
 "$$\\left | \\frac{\\partial}{\\partial x}G^{-1} \\right | = \\left | \\frac{\\partial}{\\partial x} \\frac{\\ln(x) - a}{b} \\right | = \\left | \\frac{1}{b} \\cdot \\frac{1}{x} \\right | $$\n",
 "\n",
@@ -733,7 +733,7 @@
 "source": [
 "### Solution by hand\n",
 "\n",
-"We now implement theis analytic procesure in PyTensor:"
+"We now implement this analytic procedure in PyTensor:"
 ]
 },
 {
@@ -803,7 +803,7 @@
 "id": "bcd081d3",
 "metadata": {},
 "source": [
-"We can verify these values are exaclty what we are expecting:"
+"We can verify these values are exactly what we are expecting:"
 ]
 },
 {
@@ -859,7 +859,7 @@
 "id": "46834a6f",
 "metadata": {},
 "source": [
-"As above let's verify taht the results are consistent and correct:"
+"As above let's verify that the results are consistent and correct:"
 ]
 },
 {
...