Friday, December 27, 2019

Unwinding confusion about (bi)rational maps

I am often confused by birational maps, or just rational maps in general, because they are not quite maps in the usual sense. Let me try to undo my confusion by rewriting some material about these confusing objects in this posting. I am following Vakil's online book.

Given schemes $X, Y,$ we define a rational map $X \dashrightarrow Y$ to be an equivalence class of scheme maps $\alpha : U \rightarrow Y,$ where $U \subset X$ is a dense open subset with the following equivalence relation: $(\alpha, U) \sim (\beta, V)$ means there is a dense open subset $W \subset U \cap V$ such that $\alpha|_{W} = \beta|_{W}.$
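
For a field $k,$ here is a minimal example (standard, not specific to Vakil's discussion) to keep in mind: inversion on the affine line, $$\mathbb{A}^{1}_{k} \dashrightarrow \mathbb{A}^{1}_{k}, \qquad x \mapsto 1/x,$$ is the class of the scheme map $U = \mathbb{A}^{1}_{k} \setminus \{0\} \rightarrow \mathbb{A}^{1}_{k}$ corresponding to the $k$-algebra map $k[t] \rightarrow k[t, t^{-1}]$ given by $t \mapsto t^{-1}.$ It is a rational map that is not the class of any map defined on all of $\mathbb{A}^{1}_{k}.$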

Remark. We often assume that $X$ is reduced because then dense open subsets are precisely the scheme-theoretically dense open subschemes of $X,$ a notion that works better than the usual (topological) density in various situations. Using scheme-theoretic density in the definition instead gives an analogous notion of rational map (relevant for non-reduced $X$), which some people seem to study.

A rational map $X \dashrightarrow Y$ is said to be dominant if some representative $U \rightarrow Y$ has dense image in $Y.$ A dominant rational map $\pi : X \dashrightarrow Y$ is called birational if there is another dominant rational map $\phi : Y \dashrightarrow X$ such that

  • $\pi \circ \phi \sim \mathrm{id}_{Y}$ and
  • $\phi \circ \pi \sim \mathrm{id}_{X},$

on some dense open subsets (of $X$ and $Y,$ respectively).

When $X$ and $Y$ are reduced, this complicated-looking notion means something simple: $X$ and $Y$ are birational if and only if they have isomorphic dense open subsets, i.e., there are dense open subsets $U \subset X$ and $V \subset Y$ with $U \simeq V.$
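
For example (standard), $\mathbb{A}^{n}_{k}$ and $\mathbb{P}^{n}_{k}$ are birational for any field $k$: the standard open embedding $$\mathbb{A}^{n}_{k} \hookrightarrow \mathbb{P}^{n}_{k}$$ is an isomorphism onto a dense open subset, so we may take $U = \mathbb{A}^{n}_{k}$ and $V$ its image. Of course, they are not isomorphic for $n \geq 1,$ since one is proper over $k$ and the other is not.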

Let $k$ be a field. Then integral finite type $k$-schemes with dominant rational maps form a category. If we have a dominant rational map $X \dashrightarrow Y,$ it sends the generic point of $X$ to that of $Y,$ so we get a corresponding function field extension $K(Y) \rightarrow K(X)$ over $k.$ It turns out that this establishes a (contravariant) equivalence between the category of integral finite type $k$-schemes with dominant rational maps over $k$ and the category of field extensions that are finitely generated over $k.$ (See Proposition 6.5.7 and 6.5.D in Vakil.)
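
As a quick sanity check of this equivalence (a standard computation, not quoted from Vakil): $\mathbb{P}^{n}_{k}$ and $\mathbb{A}^{n}_{k}$ have the same function field, $$K(\mathbb{P}^{n}_{k}) \simeq K(\mathbb{A}^{n}_{k}) \simeq k(t_{1}, \dots, t_{n}),$$ which matches the fact, noted above, that they are birational.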

Separatedness makes rational maps easier. If the target of a rational map is separated (over some base scheme), then it is easier to study the rational map, essentially due to the following fact (from 10.2.2. of Vakil):

Theorem (Reduced-to-Separated). Let $X, Y$ be schemes over a scheme $S.$ Suppose that $X$ is reduced and $Y$ is separated over $S.$ If two $S$-scheme maps from $X$ to $Y$ agree on a dense open subset of $X,$ then they must be identical.
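
A standard (non-)example showing why separatedness is needed here: let $Y$ be the affine line with doubled origin over a field $k$ (which is not separated over $k$), and consider the two evident open embeddings $$\mathbb{A}^{1}_{k} \rightrightarrows Y$$ hitting the two different origins. They agree on the dense open subset $\mathbb{A}^{1}_{k} \setminus \{0\}$ but are not identical.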

How does this help us when studying rational maps? Let $X$ be reduced, defined over a field $k,$ and $Y$ separated over $k.$ A rational map $X \dashrightarrow Y$ defined over $k$ is represented by a $k$-scheme map $U \rightarrow Y,$ where $U \subset X$ is a dense open subset. If there is another representative $V \rightarrow Y,$ then the two representatives agree on a dense open subset of $U \cap V,$ so the above theorem (applied to the reduced scheme $U \cap V$) tells us that they agree on all of $U \cap V.$ This implies that we can glue these two maps to get another representative $U \cup V \rightarrow Y.$ Note that we can glue arbitrarily many representatives this way, and get a scheme map $W \rightarrow Y,$ where $W$ is the union of all dense open subsets coming from the representatives. This $W$ is hence unique and deserves a name. It is called the domain of definition of the rational map $X \dashrightarrow Y.$
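
A minimal example (standard, not from the text): the rational map $$\mathbb{A}^{2}_{k} \dashrightarrow \mathbb{P}^{1}_{k}, \qquad (x, y) \mapsto [x : y],$$ is represented by a map defined on the dense open subset $\mathbb{A}^{2}_{k} \setminus \{(0,0)\},$ and one can check that it does not extend over the origin, so its domain of definition is exactly $\mathbb{A}^{2}_{k} \setminus \{(0,0)\}.$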

This is great because in this situation (where the source is reduced and the target is separated), we can get an actual map that contains the information about a rational map. This is much less confusing!

Thursday, December 19, 2019

Upper semicontinuity

This is a notion that I have to look up whenever I encounter it. Hence, I decided to write it down in my own terms so that I don't have to look it up as many times. In Vakil (11.4), this notion is defined as follows: given a topological space $X,$ a function $f : X \rightarrow \mathbb{R}$ is called upper semi-continuous if $f^{-1}((-\infty, a)) \subset X$ is open for any $a \in \mathbb{R}.$ This is a formal definition, and experts don't seem to think this way. Informally, it is the condition that $f$ can only jump up when taking limits: whenever $x_{n} \rightarrow x,$ we have $f(x) \geq \limsup_{n} f(x_{n}).$
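
Here is a tiny example (mine, not Vakil's) to see the definition in action: the function $f : \mathbb{R} \rightarrow \mathbb{R}$ given by $$f(x) = \begin{cases} 1 & \text{if } x = 0, \\ 0 & \text{if } x \neq 0, \end{cases}$$ is upper semi-continuous, because $f^{-1}((-\infty, a))$ is $\emptyset$ for $a \leq 0,$ is $\mathbb{R} \setminus \{0\}$ for $0 < a \leq 1,$ and is $\mathbb{R}$ for $a > 1,$ all of which are open. The value jumps up at $0$ but never down.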

Theorem. The following are equivalent:

  • $f$ is upper semi-continuous;
  • for any net $(x_{n})$ in $X$ (that is, a family indexed by a directed set, with possibly uncountably many indices $n$) such that $f(x_{n}) \geq f(x),$ if $x_{n} \rightarrow x$ in $X,$ then $f(x_{n}) \rightarrow f(x)$ in $\mathbb{R}.$

Proof. Let $f$ be upper semi-continuous. Given a net $(x_{n})$ in $X$ such that $f(x_{n}) \geq f(x)$ and $x_{n} \rightarrow x$ in $X,$ fix any $\epsilon > 0.$ Then $f^{-1}((-\infty, f(x) + \epsilon)) \subset X$ is open by upper semi-continuity of $f,$ and it contains $x,$ so since $x_{n} \rightarrow x$ in $X,$ we have $x_{n} \in f^{-1}((-\infty, f(x) + \epsilon))$ for all large enough indices $n.$ This implies that $$f(x_{n}) \in (-\infty, f(x) + \epsilon) \subset \mathbb{R}$$ for all large enough $n.$ Since $f(x_{n}) \geq f(x) > f(x) - \epsilon,$ we have $$f(x_{n}) \in (f(x) - \epsilon, f(x) + \epsilon) \subset \mathbb{R}$$ for all large enough $n.$ This shows that $f(x_{n}) \rightarrow f(x)$ in $\mathbb{R}.$

Conversely, suppose the second condition above. Fix $a \in \mathbb{R},$ and we aim to show that $f^{-1}((-\infty, a)) \subset X$ is open. Fix any $x \in f^{-1}((-\infty, a)).$ We want to show that there is an open neighborhood $U \ni x$ in $X$ such that $U \subset f^{-1}((-\infty, a)).$ We now work by contradiction and suppose that there is no such $U.$ In other words, for any open neighborhood $U_{n} \ni x$ in $X,$ there exists $x_{n} \in U_{n}$ such that $f(x_{n}) \geq a > f(x).$ Indexing the neighborhoods of $x$ by reverse inclusion, the points $x_{n}$ form a net converging to $x$ with $f(x_{n}) \geq f(x),$ so the second condition applies and gives $$f(x) = \lim_{n}f(x_{n}) \geq a > f(x),$$ a contradiction. This finishes the proof. $\Box$
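
For context, the instance of this notion that I care about (a standard fact, recorded here without proof) is the upper semi-continuity of fiber dimension: for a morphism $f : X \rightarrow Y$ of finite type between Noetherian schemes, the function $$x \mapsto \dim_{x}\left(f^{-1}(f(x))\right)$$ is upper semi-continuous on $X.$ I believe this is the setting in which the notion shows up in Vakil (11.4).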

Saturday, December 14, 2019

Sheaves on small étale sites (possibly to be extended)

We follow Milne's book. I thank Patrick Kelley for summarizing this material when we studied it together.

We have explained the definition of a site in this posting. The motivating example is $\textbf{Et}_{X},$ the (small) étale site of a scheme $X$: its objects are étale maps into $X,$ and its morphisms are the ($X$-scheme) maps between them.
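
As a reminder (paraphrasing the earlier posting and Milne), a covering of an object $U$ of $\textbf{Et}_{X}$ is a family of étale maps $\{U_{i} \rightarrow U\}_{i \in I}$ over $X$ that is jointly surjective, i.e., such that $U = \bigcup_{i \in I} \mathrm{im}(U_{i} \rightarrow U).$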

A site is a category $\mathcal{C}$ together with a collection of coverings for each of its objects. A presheaf valued in $\textbf{Set}$ on $\mathcal{C}$ is a functor $\mathscr{F} : \mathcal{C}^{\mathrm{op}} \rightarrow \textbf{Set}.$ We may replace the target category $\textbf{Set}$ with other categories to obtain analogous notions. Maps between two presheaves are given by natural transformations, and thus, presheaves on $\mathcal{C}$ form a category.

A presheaf $\mathscr{F} : \mathcal{C}^{\mathrm{op}} \rightarrow \textbf{Set}$ is said to be a sheaf if it satisfies the following two extra axioms for every object $U$ of $\mathcal{C}$ (see also the equalizer reformulation right after the list).

  1. Given any covering $\{U_{i} \rightarrow U\}_{i \in I}$ of $U$ and $s, t \in \mathscr{F}(U),$ if $s|_{U_{i}} = t|_{U_{i}}$ for all maps $U_{i} \rightarrow U$ in the covering, then $s = t.$
  2. Given any covering $\{U_{i} \rightarrow U\}_{i \in I}$ of $U$ and $s_{i} \in \mathscr{F}(U_{i})$ for each map $U_{i} \rightarrow U$ in the covering, if $s_{i}|_{U_{i} \times_{U} U_{j}} = s_{j}|_{U_{i} \times_{U} U_{j}}$ for all maps $U_{i} \rightarrow U$ and $U_{j} \rightarrow U$ in the covering, then there is $s \in \mathscr{F}(U)$ such that $s|_{U_{i}} = s_{i}$ for all maps $U_{i} \rightarrow U$ in the covering.
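
Packaged together (this is the usual reformulation, which we also use in the proof below), the two axioms say exactly that for every object $U$ and every covering $\{U_{i} \rightarrow U\}_{i \in I},$ the diagram $$\mathscr{F}(U) \rightarrow \prod_{i \in I}\mathscr{F}(U_{i}) \rightrightarrows \prod_{i,j \in I}\mathscr{F}(U_{i} \times_{U} U_{j}),$$ with the maps given by restrictions, is an equalizer.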

Note that sheaves on $\mathcal{C}$ form a full subcategory of the category of presheaves on $\mathcal{C}.$ We now focus on the case $\mathcal{C} = \textbf{Et}_{X}$ for the following proposition, which we learn from Milne (II. Proposition 1.5). We write $\textbf{Zar}_{X}$ to mean the small Zariski site of a scheme $X.$ We will take the values of sheaves in $\textbf{Ab}.$

Proposition. Let $\mathscr{F}$ be a presheaf on $\textbf{Et}_{X}.$ Then $\mathscr{F}$ is a sheaf if and only if the following two properties are satisfied.

  1. For any object $U$ of $\textbf{Et}_{X},$ the restriction of $\mathscr{F}$ to $\textbf{Zar}_{U}$ is a sheaf.
  2. For any covering $\{V \rightarrow U\},$ consisting of a single map of $\textbf{Et}_{X}$ such that $V$ and $U$ are affine, the sequence $\mathscr{F}(U) \rightarrow \mathscr{F}(V) \rightrightarrows \mathscr{F}(V \times_{U} V)$ given by restrictions is an equalizer.

Sketch of proof. It is evident that the sheaf axioms for $\mathscr{F}$ imply the two given conditions, noting that open embeddings are étale.

For the converse, our goal is to check that for every object $U$ of $\textbf{Et}_{X}$ and every covering $\{U_{i} \rightarrow U\}_{i \in I},$ the sequence $$\mathscr{F}(U) \rightarrow \prod_{i \in I}\mathscr{F}(U_{i}) \rightrightarrows \prod_{i,j \in I}\mathscr{F}(U_{i} \times_{U} U_{j})$$ given by the restrictions is an equalizer.

Fix such a covering $\{U_{i} \rightarrow U\}_{i \in I}$ and consider $V := \bigsqcup_{i \in I}U_{i}.$ The first condition implies that we have $$\mathscr{F}(V) \simeq \prod_{i \in I}\mathscr{F}(U_{i})$$ induced by the restrictions. Note that we have $$V \times_{U} V \simeq \bigsqcup_{i, j \in I} (U_{i} \times_{U} U_{j}),$$ as one sees by checking that the right-hand side satisfies the universal property of the left-hand side. Having this in mind, one may check that our goal reduces to checking that the sequence $$\mathscr{F}(U) \rightarrow \mathscr{F}(V) \rightrightarrows \mathscr{F}(V \times_{U} V)$$ is an equalizer. The second condition tells us that this holds when $I$ is finite and each $U_{i}$ for $i \in I$ and $U$ are affine.

For the general case, denote by $\pi : V \rightarrow U$ the morphism induced by the maps $U_{i} \rightarrow U$ from the fixed covering of $U.$ We may write $U = \bigcup_{j \in J} W_{j},$ where each $W_{j} \subset U$ is affine open. Then $\pi^{-1}(W_{j}) \subset V$ is open, so we may write $$\pi^{-1}(W_{j}) = \bigcup_{k \in K_{j}} W_{j,k},$$ where $W_{j,k} \subset V$ are affine opens. Note that $\{W_{j,k} \rightarrow W_{j}\}_{k \in K_{j}}$ forms an étale cover of $W_{j},$ where the maps are given by $$W_{j,k} \hookrightarrow \pi^{-1}(W_{j}) \xrightarrow{\pi} W_{j}.$$ Since each $\pi(W_{j,k}) \subset W_{j}$ is open (étale maps are open) and $W_{j}$ is quasi-compact, we may assume that $K_{j}$ is finite for such an étale covering of $W_{j}.$ We have $$V = \bigcup_{j \in J}\pi^{-1}(W_{j}) = \bigcup_{j \in J, k \in K_{j}} W_{j,k}.$$

So far, we know that the following sequences are equalizers:
  • $\mathscr{F}(U) \rightarrow \prod_{j \in J}\mathscr{F}(W_{j}) \rightrightarrows \prod_{j,j' \in J}\mathscr{F}(W_{j} \cap W_{j'})$ by the first condition;
  • $\mathscr{F}(V) \rightarrow \prod_{j \in J}\prod_{k \in K_{j}}\mathscr{F}(W_{j,k}) \rightrightarrows \prod_{j,j' \in J}\prod_{k \in K_{j}, k' \in K_{j'}}\mathscr{F}(W_{j,k} \cap W_{j',k'})$ by the first condition;
  • $\mathscr{F}(W_{j}) \rightarrow \prod_{k \in K_{j}}\mathscr{F}(W_{j,k}) \rightrightarrows \prod_{k,l \in K_{j}} \mathscr{F}(W_{j,k} \times_{U} W_{j,l})$ since $K_{j}$ is a finite set and $W_{j}, W_{j,k}$ are affines.

We are now ready to show that $$\mathscr{F}(U) \rightarrow \mathscr{F}(V) \rightrightarrows \mathscr{F}(V \times_{U} V)$$ is an equalizer. Given any $s \in \mathscr{F}(U),$ suppose that $s|_{V} = 0 \in \mathscr{F}(V).$ This means that $s|_{W_{j,k}} = 0 \in \mathscr{F}(W_{j,k}),$ for all $j \in J$ and $k \in K_{j}.$ Hence, we have $s|_{W_{j}} = 0 \in \mathscr{F}(W_{j})$ for all $j \in J,$ which implies that $s = 0 \in \mathscr{F}(U).$ This already shows that $\mathscr{F}$ is a separated presheaf (meaning a presheaf with the first condition for being a sheaf), and we will use this observation soon.

Denote by $p : V \times_{U} V \rightarrow V$ the projection on the first component and $q : V \times_{U} V \rightarrow V$ the second. Given any $u \in \mathscr{F}(V),$ suppose that $$p^{*}u = q^{*}u\in \mathscr{F}(V \times_{U} V).$$ Then $$(p^{*}u)|_{W_{j,k} \times_{U} W_{j,l}} = (q^{*}u)|_{W_{j,k} \times_{U} W_{j,l}} \in \mathscr{F}(W_{j,k} \times_{U} W_{j,l})$$ for all $j \in J$ and $k, l \in K_{j}.$ Define $(t_{j,k}) \in \prod_{k \in K_{j}}\mathscr{F}(W_{j,k})$ by $t_{j,k} := u|_{W_{j,k}}.$ Since $p$ and $q$ restrict to the two projections $W_{j,k} \times_{U} W_{j,l} \rightarrow W_{j,k}$ and $W_{j,k} \times_{U} W_{j,l} \rightarrow W_{j,l},$ the displayed equality says exactly that $$t_{j,k}|_{W_{j,k} \times_{U} W_{j,l}} = t_{j,l}|_{W_{j,k} \times_{U} W_{j,l}}$$ for all $k, l \in K_{j}.$ Hence, by the third equalizer above, we have a unique $t_{j} \in \mathscr{F}(W_{j})$ such that $t_{j}|_{W_{j,k}} = t_{j,k}$ for all $j, k.$

We will be done as soon as we can show that $t_{j}|_{W_{j} \cap W_{j'}} = t_{j'}|_{W_{j} \cap W_{j'}}$ for all $j, j' \in J.$ The reason is that this would give us (by the first equalizer above) a unique $t \in \mathscr{F}(U)$ such that $t|_{W_{j}} = t_{j}$ for all $j \in J,$ and since $$(t|_{V})|_{W_{j,k}} = (t_{j})|_{W_{j,k}} = t_{j,k} = u|_{W_{j,k}}$$ for all $j, k,$ we must have $t|_{V} = u,$ as desired (this last step can be checked by applying one more pullback and using the separatedness of $\mathscr{F}$ observed above).

To check $t_{j}|_{W_{j} \cap W_{j'}} = t_{j'}|_{W_{j} \cap W_{j'}},$ it is enough to check $$t_{j}|_{W_{j,k} \cap W_{j',k'}} = t_{j'}|_{W_{j,k} \cap W_{j',k'}},$$ where $k \in K_{j}$ and $k' \in K_{j'}.$ This is the same as checking $$t_{j,k}|_{W_{j,k} \cap W_{j',k'}} = t_{j',k'}|_{W_{j,k} \cap W_{j',k'}},$$ which we already know, since by construction both sides are restrictions of $u.$ This finishes the proof. $\Box$

Monday, December 2, 2019

Computing differential 1-forms of an algebra

We follow Chapter 2 of Bosch, Lütkebohmert, and Raynaud and notes by Hochster, as well as other references such as Vakil. By a ring, we will mean a commutative unital ring.

Let $R$ be a ring and $A$ an $R$-algebra. Given an $A$-module $M,$ an $R$-linear map $D : A \rightarrow M$ such that $$D(fg) = fD(g) + gD(f)$$ for all $f, g \in A$ is called an $R$-derivation of $A$ into $M$.
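
A first example (standard): take $A = R[t],$ $M = A,$ and $D = \frac{d}{dt},$ the usual formal differentiation of polynomials. Then $D$ is $R$-linear and satisfies $$D(fg) = fD(g) + gD(f),$$ so it is an $R$-derivation of $R[t]$ into itself.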

Remark. Note that $D(1) = D(1) + D(1)$ so that $D(1) = 0.$ Hence, for any $r \in R,$ we have $D(r) = D(r \cdot 1) = rD(1) = 0.$ That is, the derivative of any constant (i.e., element of $R$) is zero!

Warning. The only $R$-derivation $D : A \rightarrow M$ that is $A$-linear is the zero map, due to a similar argument as above. Whenever I get confused, I call $D$ an $R$-linear derivation instead.

We denote by $\mathrm{Der}_{R}(A, M)$ the set of all $R$-derivations of $A$ into $M,$ which is also an $A$-module given by the $A$-module structure of $M.$ The module of relative differential forms (of degree 1) of $A$ over $R$ is an $A$-module $\Omega^{1}_{A/R}$ with an $R$-derivation $d = d_{A/R} : A \rightarrow \Omega^{1}_{A/R},$ called the exterior differential, such that for every $A$-module $M$ we have $$\mathrm{Hom}_{A}(\Omega^{1}_{A/R}, M) \simeq \mathrm{Der}_{R}(A, M)$$ in $\textbf{Set}$ given by $\phi \mapsto \phi \circ d.$ That is, the functor $\mathrm{Der}_{R}(A, -) : \textbf{Mod}_{A} \rightarrow \textbf{Set}$ is representable. Of course, this functor can be seen as $\textbf{Mod}_{A} \rightarrow \textbf{Mod}_{A},$ but we do not need this. It is abstract nonsense to show that such $(\Omega^{1}_{A/R}, d)$ is unique up to a unique isomorphism compatible with $d,$ and one may construct it as follows.
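
One way I like to remember the universal property (a standard reformulation, nothing new): taking $M = A,$ it says $$\mathrm{Der}_{R}(A, A) \simeq \mathrm{Hom}_{A}(\Omega^{1}_{A/R}, A),$$ i.e., $R$-derivations of $A$ into itself ("vector fields") are the same as $A$-linear functionals on $\Omega^{1}_{A/R}.$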

Construction of the exterior differential. A formal construction of $d : A \rightarrow \Omega^{1}_{A/R}$ for the sake of existence is quite easy. That is, we take $$\Omega^{1}_{A/R} = \frac{\bigoplus_{a \in A} A da}{(d(a + b) - da - db, d(ab) - adb - bda, d(ra) - rda)_{a, b \in A, r \in R}}.$$ Taking $d : A \rightarrow \Omega^{1}_{A/R}$ by $a \mapsto da,$ we get what we need.

In practice, such a formal construction is quite useless, so we give better descriptions of $d : A \rightarrow \Omega^{1}_{A/R}.$

Lemma. Let $A := R[t_{i}]_{i \in I},$ a free $R$-algebra. Then we have $\Omega^{1}_{A/R} = \bigoplus_{i \in I} A dt_{i},$ where $d : A \rightarrow \bigoplus_{i \in I} A dt_{i}$ is given by $$df = \sum_{i \in I}\frac{\partial f}{\partial t_{i}} dt_{i}.$$ That is, the exterior derivative is given by taking the gradient.

Proof. Using the product rule for partial derivatives of polynomials, we check that $d$ is indeed an $R$-derivation. To check the universal property, consider any $R$-derivation $D : A \rightarrow M.$ The only way to define an $A$-module map $\phi : \Omega^{1}_{A/R} \rightarrow M$ such that $D = \phi \circ d$ is to send $df$ to $Df,$ so in particular we must send $dt_{i}$ to $Dt_{i};$ this actually constructs such a $\phi,$ since $\Omega^{1}_{A/R}$ is the free $A$-module on the elements $dt_{i}.$ This finishes the proof in this case. $\Box$
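
To make the Lemma concrete, here is a small computation (mine): with $A = R[x, y],$ we have $\Omega^{1}_{A/R} = A\,dx \oplus A\,dy,$ and for instance $$d(x^{2}y + y^{3}) = 2xy\,dx + (x^{2} + 3y^{2})\,dy.$$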

In general, an $R$-algebra $A$ is isomorphic to an $R$-algebra of the form $$R[t_{i}]_{i \in I}/\mathfrak{b},$$ where $\mathfrak{b} \subset R[t_{i}]_{i \in I}$ is an ideal. The next statement, together with the Lemma above, computes $d : A \rightarrow \Omega^{1}_{A/R}$ with respect to such a presentation.

Theorem.  Let $$A := B/\mathfrak{b},$$ where $B$ is an $R$-algebra and $\mathfrak{b} \subset B$ is an ideal. Then we can construct $$\Omega^{1}_{A/R} = \frac{\Omega^{1}_{B/R}}{\mathfrak{b}\Omega^{1}_{B/R} + Bd_{B/R}\mathfrak{b}},$$ where $d_{A/R} : A \rightarrow \Omega^{1}_{A/R}$ is given by $\bar{b} \mapsto \overline{d_{B/R}b}.$

Proof. Let $\Omega^{1}_{A/R}$ be as described in the statement. Then $\mathfrak{b}\Omega^{1}_{A/R} = 0,$ so $\Omega^{1}_{A/R}$ is a module over $A = B/\mathfrak{b}.$ The map $B \rightarrow \Omega^{1}_{A/R}$ given by $b \mapsto \overline{d_{B/R}b}$ is an $R$-linear map that kills $\mathfrak{b}$ (since $d_{B/R}\mathfrak{b}$ is quotiented out), so the induced map $d_{A/R} : \bar{b} \mapsto \overline{d_{B/R}b}$ is well-defined and $R$-linear on $A = B/\mathfrak{b};$ it satisfies the Leibniz rule because $d_{B/R}$ does, so it is an $R$-derivation.

To check the universal property, let $D : A \rightarrow M$ be any $R$-derivation. This is the same as saying that $M$ is a $B$-module such that $\mathfrak{b}M = 0$ and that we have an $R$-derivation $\tilde{D} : B \rightarrow M$ with $\tilde{D}\mathfrak{b} = 0,$ given by $\tilde{D}b = D\bar{b}.$ Hence, there is a unique $B$-module map $\tilde{\phi} : \Omega^{1}_{B/R} \rightarrow M$ such that $\tilde{\phi}(d_{B/R}b) = \tilde{D}b = D\bar{b}.$ This map kills $\mathfrak{b}\Omega^{1}_{B/R} + Bd_{B/R}\mathfrak{b},$ so it induces an $A$-module map $\phi : \Omega^{1}_{A/R} \rightarrow M$ such that $\phi \circ d_{A/R} = D.$ The uniqueness of $\phi$ easily follows, which finishes the proof. $\Box$

Application. If $A = B/\mathfrak{b}$ with $B = R[x_{i}]_{i \in I},$ then we have $$\Omega^{1}_{B/R} = \bigoplus_{i \in I} B dx_{i}$$ so that $$\frac{\Omega^{1}_{B/R}}{\mathfrak{b}\Omega^{1}_{B/R}} = \frac{\bigoplus_{i \in I} B dx_{i}}{\bigoplus_{i \in I} \mathfrak{b}B dx_{i}} \simeq \bigoplus_{i \in I} A dx_{i}.$$ We can write $\mathfrak{b} = (f_{j})_{j \in J}$ for some family $f_{j} \in B.$ By the Leibniz rule, $d_{B/R}(gf_{j}) \equiv g\,d_{B/R}f_{j}$ modulo $\mathfrak{b}\Omega^{1}_{B/R},$ so modulo $\mathfrak{b}\Omega^{1}_{B/R}$ the submodule $Bd_{B/R}\mathfrak{b}$ is generated by $$df_{j} = \sum_{i \in I}\frac{\partial f_{j}}{\partial x_{i}} dx_{i}, \qquad j \in J.$$ This implies that under the above isomorphism, we have $$\frac{\mathfrak{b}\Omega^{1}_{B/R} + Bd_{B/R}\mathfrak{b}}{\mathfrak{b}\Omega^{1}_{B/R}} \simeq A\left\{\sum_{i \in I}\frac{\partial f_{j}}{\partial x_{i}} dx_{i}\right\}_{j \in J}.$$ Therefore, we may write $$\Omega^{1}_{A/R} = \bigoplus_{i \in I} A dx_{i} \Biggm/ A\left\{\sum_{i \in I}\frac{\partial f_{j}}{\partial x_{i}} dx_{i}\right\}_{j \in J}.$$ This is the content of Key fact 21.2.3 in Vakil. The author uses the last description to construct $\Omega^{1}_{A/R}.$

Jacobian description of differential $1$-forms. This is from 21.2.E in Vakil. Again, when $A = B/\mathfrak{b},$ where $B$ is an $R$-algebra and $\mathfrak{b} \subset B$ is an ideal, we have $$\Omega_{A/R}^{1} = \frac{\Omega^{1}_{B/R}}{\mathfrak{b}\Omega^{1}_{B/R} + Bd_{B/R}\mathfrak{b}},$$ and we saw this formally, but let's now make it more concrete.

Let $B = k[x_{1}, \dots, x_{n}],$ where $k$ is a field, so that (by the Hilbert basis theorem) we may write $\mathfrak{b} = (f_{1}, \dots, f_{r})$ for some $f_{j} \in B.$ That is, we consider the case $$A = \frac{k[x_{1}, \dots, x_{n}]}{(f_{1}, \dots, f_{r})},$$ a finitely presented algebra over $k.$ We have $$\Omega_{B/k}^{1} = B dx_{1} \oplus \cdots \oplus B dx_{n},$$ so $$\begin{align*} \frac{\Omega_{B/k}^{1}}{\mathfrak{b}\Omega_{B/k}^{1}} &= \frac{B dx_{1} \oplus \cdots \oplus B dx_{n}}{\mathfrak{b} dx_{1} \oplus \cdots \oplus \mathfrak{b} dx_{n}} \\ &\simeq (B/\mathfrak{b}) dx_{1} \oplus \cdots \oplus (B/\mathfrak{b}) dx_{n} \\ &= A dx_{1} \oplus \cdots \oplus A dx_{n}. \end{align*}$$ Since $$\Omega_{A/k}^{1} = \frac{\Omega_{B/k}^{1}}{\mathfrak{b}\Omega_{B/k}^{1} + Bd_{B/k}\mathfrak{b}} \simeq \frac{\Omega_{B/k}^{1}/\mathfrak{b}\Omega_{B/k}^{1}}{\text{image of } Bd_{B/k}\mathfrak{b} \text{ in } \Omega_{B/k}^{1}/\mathfrak{b}\Omega_{B/k}^{1}},$$ we may write $$\Omega_{A/k}^{1} = \frac{A dx_{1} \oplus \cdots \oplus A dx_{n}}{(df_{1}, \dots, df_{r})}.$$ Note that $$df_{j} = \frac{\partial f_{j}}{\partial x_{1}} dx_{1} + \cdots + \frac{\partial f_{j}}{\partial x_{n}} dx_{n}$$ in $A dx_{1} \oplus \cdots \oplus A dx_{n}.$

As a result, we have the exact sequence $$A dy_{1} \oplus \cdots \oplus A dy_{r} \xrightarrow{J} A dx_{1} \oplus \cdots \oplus A dx_{n} \rightarrow \Omega_{A/k}^{1} \rightarrow 0,$$ where $J$ can be described as an $A$-linear map $J : A^{\oplus r} \rightarrow A^{\oplus n}$ given by the $n \times r$ Jacobian matrix $$J = \left[ \frac{\partial f_{j}}{\partial x_{i}} \right]_{1 \leq i \leq n, \, 1 \leq j \leq r}$$ whose entries are in $A = k[x_{1}, \dots, x_{n}]/(f_{1}, \dots, f_{r}).$
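
Here is a small worked example (mine): take $A = k[x, y]/(y^{2} - x^{3}),$ the cuspidal cubic, so $n = 2,$ $r = 1,$ and $f_{1} = y^{2} - x^{3}.$ Then $$df_{1} = -3x^{2}\,dx + 2y\,dy, \qquad J = \begin{bmatrix} -3x^{2} \\ 2y \end{bmatrix}, \qquad \Omega^{1}_{A/k} = \frac{A\,dx \oplus A\,dy}{A(-3x^{2}\,dx + 2y\,dy)}.$$ One can check that the dimension of the fiber $\Omega^{1}_{A/k} \otimes_{A} \kappa(p)$ over the residue field $\kappa(p)$ is $1$ at every point $p$ other than the cusp and jumps to $2$ at the cusp, an instance of the upper semi-continuity discussed in the December 19 posting above.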

Localization. If we have any ring map $B \rightarrow A$ and $S \subset A$ is a multiplicative submonoid, then we have $$\Omega^{1}_{S^{-1}A/B} \simeq S^{-1}\Omega^{1}_{A/B},$$ as $S^{-1}A$-modules. Moreover, one may check that the map $$S^{-1}A \rightarrow S^{-1}\Omega^{1}_{A/B}$$ given by $$\frac{a}{s} \mapsto \frac{sd_{A/B}(a) - ad_{A/B}(s)}{s^{2}}$$ is a well-defined $B$-linear derivation that is compatible with $$d_{S^{-1}A/B} : S^{-1}A \rightarrow \Omega^{1}_{S^{-1}A/B}$$ and the isomorphism given above.

Example. Let $k$ be a field. We have $$\Omega^{1}_{k(t_{1}, \dots, t_{n})/k} = k(t_{1}, \dots, t_{n})dt_{1} \oplus \cdots \oplus k(t_{1}, \dots, t_{n})dt_{n},$$ whose exterior derivative can be given by the quotient rule.

Example. Again, let $k$ be a field and let $A := k(t_{1}, \dots, t_{n})[x_{1}, \dots, x_{m}].$ We want to compute $\Omega^{1}_{A/k}.$ We have $$A = k(t_{1}, \dots, t_{n})[x_{1}, \dots, x_{m}] = S^{-1}k[t_{1}, \dots, t_{n}, x_{1}, \dots, x_{m}],$$ where $$S = k[t_{1}, \dots, t_{n}] \setminus \{0\} \subset k[t_{1}, \dots, t_{n}] \subset k[t_{1}, \dots, t_{n}, x_{1}, \dots, x_{m}].$$ Hence, we have $$\begin{align*}\Omega^{1}_{A/k} &= S^{-1}\Omega^{1}_{k[t_{1}, \dots, t_{n}, x_{1}, \dots, x_{m}]/k} \\ &= S^{-1}\left(\bigoplus_{i=1}^{n}k[t_{1}, \dots, t_{n}, x_{1}, \dots, x_{m}]dt_{i} \oplus \bigoplus_{j=1}^{m}k[t_{1}, \dots, t_{n}, x_{1}, \dots, x_{m}]dx_{j}\right) \\ &= \bigoplus_{i=1}^{n}k(t_{1}, \dots, t_{n})[x_{1}, \dots, x_{m}]dt_{i} \oplus \bigoplus_{j=1}^{m}k(t_{1}, \dots, t_{n})[x_{1}, \dots, x_{m}]dx_{j}, \end{align*}$$ which is quite concrete.

Example. We keep using $k$ to denote the base field. Then consider $$A = \frac{k(t_{1}, \dots, t_{n})[x_{1}, \dots, x_{n}]}{(f_{1}(\boldsymbol{t}, x_{1}, \dots, x_{n}), \cdots, f_{r}(\boldsymbol{t}, x_{1}, \dots, x_{n}))}.$$ Write $$B = k(t_{1}, \dots, t_{n})[x_{1}, \dots, x_{n}]$$ and $$\mathfrak{b} = (f_{1}(\boldsymbol{t}, x_{1}, \dots, x_{n}), \cdots, f_{r}(\boldsymbol{t}, x_{1}, \dots, x_{n})).$$ We have $$\Omega^{1}_{B/k} = \bigoplus_{i=1}^{n}k(\boldsymbol{t})[\boldsymbol{x}]dt_{i} \oplus \bigoplus_{j=1}^{n}k(\boldsymbol{t})[\boldsymbol{x}]dx_{j},$$ and, modulo $\mathfrak{b}\Omega^{1}_{B/k},$ the submodule $Bd_{B/k}\mathfrak{b}$ is generated by $$df_{l} = \sum_{i=1}^{n}(f_{l})_{t_{i}} dt_{i} + \sum_{j=1}^{n}(f_{l})_{x_{j}} dx_{j}, \qquad l = 1, \dots, r.$$ This lets us compute $$\begin{align*} \Omega^{1}_{A/k} &= \frac{\Omega^{1}_{B/k}}{\mathfrak{b}\Omega^{1}_{B/k} + Bd_{B/k}\mathfrak{b}} \\ &\simeq \frac{\Omega^{1}_{B/k}/\mathfrak{b}\Omega^{1}_{B/k}}{(\mathfrak{b}\Omega^{1}_{B/k} + Bd_{B/k}\mathfrak{b})/\mathfrak{b}\Omega^{1}_{B/k}} \\ &\simeq \frac{\bigoplus_{i=1}^{n}Adt_{i} \oplus \bigoplus_{j=1}^{n}Adx_{j}}{A\left\{ \sum_{i=1}^{n}(f_{l})_{t_{i}} dt_{i} + \sum_{j=1}^{n}(f_{l})_{x_{j}} dx_{j} \right\}_{l=1}^{r}}, \end{align*}$$ which is very explicit.

Diagonal description. Consider a scheme map $\mathrm{Spec}(A) \rightarrow \mathrm{Spec}(R)$ between affine schemes. The diagonal map $\Delta_{A/R} : \mathrm{Spec}(A) \rightarrow \mathrm{Spec}(A) \times_{R} \mathrm{Spec}(A)$ is the scheme map induced by the ring map $A \otimes_{R} A \rightarrow A$ given by $a \otimes a' \mapsto aa'$ (as also remarked in the proof of Proposition 10.1.3 in Vakil). Denote by $I$ the kernel of this ring map, and consider the map $d : A \rightarrow I/I^{2}$ given by $f \mapsto 1 \otimes f - f \otimes 1$ modulo $I^{2}.$

Remark. The map $A \otimes_{R} A \rightarrow A$ given above is called the multiplication map of $A$ over $R,$ because it is the $R$-linear map corresponding to the actual multiplication map $A \times A \rightarrow A.$ Indeed, we have $1 \otimes f - f \otimes 1 \in I$ because $$1 \otimes f - f \otimes 1 \mapsto f - f = 0$$ under the multiplication map.

Since $I$ kills $I/I^{2},$ we see that $I/I^{2}$ is a module over $(A \otimes_{R} A)/I.$ Since the map $A \otimes_{R} A \rightarrow A$ is surjective, we have $(A \otimes_{R} A)/I \simeq A.$ This makes $I/I^{2}$ an $A$-module. Note that $a \otimes 1 = 1 \otimes a$ in $(A \otimes_{R} A)/I,$ both corresponding to $a \in A,$ so the two evident $A$-actions on $I/I^{2}$ agree. Hence, we have $$a \cdot (1 \otimes f - f \otimes 1) = a \otimes f - (af) \otimes 1 =  1 \otimes (af) - f \otimes a$$ in $I/I^{2},$ where we omit the bars denoting classes modulo $I^{2}.$ The map $d : A \rightarrow I/I^{2}$ is an $R$-module map because $$\begin{align*}d(rf) &= 1 \otimes (rf) - (rf) \otimes 1 \\ &= r \otimes f - (rf) \otimes 1 \\ &= r \cdot (1 \otimes f) - r \cdot (f \otimes 1) \\ &= r \cdot (1 \otimes f - f \otimes 1) \\ &= r \cdot df\end{align*}$$ in $I/I^{2}.$

Moreover, we note that $d : A \rightarrow I/I^{2}$ is an $R$-derivation as we can check $$\begin{align*}d(fg) &= 1 \otimes (fg) - (fg) \otimes 1 \\ &= (1 \otimes f)(1 \otimes g) - (f \otimes 1)(g \otimes 1) \\ &= (1 \otimes f)(1 \otimes g) - (1 \otimes f)(g \otimes 1) + (g \otimes 1)(1 \otimes f) - (g \otimes 1)(f \otimes 1) \\ &= f \cdot (1 \otimes g -  g \otimes 1) + g \cdot (1 \otimes f - f \otimes 1) \\ &= f \cdot d(g) + g \cdot d(f), \end{align*}$$ in $I/I^{2},$ where again, we omitted many bars in the middle.

Theorem. The map $d : A \rightarrow I/I^{2}$ also describes the exterior derivative $d_{A/R} : A \rightarrow \Omega^{1}_{A/R}.$

Proof. We may assume $A = R[x_{i}]_{i \in S}/(f_{j})_{j \in T}.$ Then $$A \otimes_{R} A \simeq \frac{R[x_{i}, y_{i}]_{i \in S}}{(f_{j}(\boldsymbol{x}), f_{j}(\boldsymbol{y}))_{j \in T}} = \frac{R[x_{i}, \Delta_{i}]_{i \in S}}{(f_{j}(\boldsymbol{x}), f_{j}(\boldsymbol{x} + \boldsymbol{\Delta}))_{j \in T}},$$ where we used the following change of variables: $\Delta_{i} = y_{i} - x_{i}.$ In this presentation, the multiplication map $A \otimes_{R} A \rightarrow A$ is given by the map $$\frac{R[x_{i}, \Delta_{i}]_{i \in S}}{(f_{j}(\boldsymbol{x}), f_{j}(\boldsymbol{x} + \boldsymbol{\Delta}))_{j \in T}} \rightarrow \frac{R[x_{i}]_{i \in S}}{(f_{j}(\boldsymbol{x}))_{j \in T}}$$ by $x_{i} \mapsto x_{i}$ and $\Delta_{i} \mapsto 0.$ Hence, by inspection, we can see that the kernel $I$ can be written as $$I = (\overline{\boldsymbol{\Delta}}) = \frac{(\boldsymbol{\Delta}, f_{j}(\boldsymbol{x}), f_{j}(\boldsymbol{x} + \boldsymbol{\Delta}))_{j \in T}}{(f_{j}(\boldsymbol{x}), f_{j}(\boldsymbol{x} + \boldsymbol{\Delta}))_{j \in T}},$$ so $$I/I^{2} \simeq \frac{(\boldsymbol{\Delta}, f_{j}(\boldsymbol{x}), f_{j}(\boldsymbol{x} + \boldsymbol{\Delta}))_{j \in T}}{({\boldsymbol{\Delta}}^{2}, f_{j}(\boldsymbol{x}), f_{j}(\boldsymbol{x} + \boldsymbol{\Delta}))_{j \in T}}.$$ Now, note that for any $f(\boldsymbol{x}) \in R[x_{i}]_{i \in S},$ we have $$f(\boldsymbol{x} + \boldsymbol{\Delta}) = f(\boldsymbol{x}) + \sum_{i \in S}\frac{\partial f(\boldsymbol{x})}{\partial x_{i}} \Delta_{i} + \sum_{i, i' \in S}\Delta_{i}\Delta_{i'}g_{i,i'}(\boldsymbol{x}, \boldsymbol{\Delta})$$ in $R[x_{i}, \Delta_{i}]_{i \in S}$ for suitable $g_{i,i'}(\boldsymbol{x}, \boldsymbol{\Delta}),$ which are in fact zero for all but finitely many $(i,i') \in S^{2}.$ In particular, this implies that $$({\boldsymbol{\Delta}}^{2}, f_{j}(\boldsymbol{x}), f_{j}(\boldsymbol{x} + \boldsymbol{\Delta}))_{j \in T} = \left({\boldsymbol{\Delta}}^{2}, f_{j}(\boldsymbol{x}), \sum_{i \in S} \frac{\partial f_{j}(\boldsymbol{x})}{\partial x_{i}} \Delta_{i}\right)_{j \in T}.$$ Write $B = R[x_{i}]_{i \in S} = R[\boldsymbol{x}]$ and $\mathfrak{b} = (f_{j}(\boldsymbol{x}))_{j \in T}.$ Then we can consider the surjective $B$-linear map $$\Omega^{1}_{B/R} = \bigoplus_{i \in S} B dx_{i} \twoheadrightarrow I/I^{2}$$ defined by $dx_{i} \mapsto \overline{\Delta_{i}}.$

It's time to compute its kernel. Consider a general element $$\sum_{i \in S}g_{i}(\boldsymbol{x}) dx_{i} \mapsto 0$$ under this map. This implies that $$\sum_{i \in S}g_{i}(\boldsymbol{x}) \Delta_{i} \in \left({\boldsymbol{\Delta}}^{2}, f_{j}(\boldsymbol{x}), \sum_{i \in S} \frac{\partial f_{j}(\boldsymbol{x})}{\partial x_{i}} \Delta_{i}\right)_{j \in T}$$ in $R[\boldsymbol{x}, \boldsymbol{\Delta}],$ so we may write $$\sum_{i \in S}g_{i}(\boldsymbol{x}) \Delta_{i} = \sum_{i,i' \in S}h_{i,i'}(\boldsymbol{x}, \boldsymbol{\Delta}) \Delta_{i}\Delta_{i'} + \sum_{j \in T}h_{j}(\boldsymbol{x}, \boldsymbol{\Delta})f_{j}(\boldsymbol{x}) + \sum_{j \in T}\tilde{h}_{j}(\boldsymbol{x}, \boldsymbol{\Delta})\sum_{i \in S} \frac{\partial f_{j}(\boldsymbol{x})}{\partial x_{i}} \Delta_{i}$$ for suitable $h_{i,i'}, h_{j}, \tilde{h}_{j} \in R[\boldsymbol{x}, \boldsymbol{\Delta}].$ Comparing the parts that are linear in $\boldsymbol{\Delta},$ we get $$g_{i}(\boldsymbol{x}) = \sum_{j \in T}c_{j}(\boldsymbol{x})\frac{\partial f_{j}(\boldsymbol{x})}{\partial x_{i}} + \text{ some element in } (f_{j}(\boldsymbol{x}))_{j \in T} = \mathfrak{b}$$ in $R[\boldsymbol{x}] = B,$ where $c_{j}(\boldsymbol{x})$ denotes the constant term of $\tilde{h}_{j}$ in the variables $\boldsymbol{\Delta}.$ Conversely, any such $\sum_{i \in S}g_{i}(\boldsymbol{x}) dx_{i}$ is in the kernel. This implies that the kernel is precisely $$\mathfrak{b}\Omega^{1}_{B/R} + Bd_{B/R}\mathfrak{b},$$ so we now have the $B$-linear isomorphism $$\Omega^{1}_{A/R} = \frac{\Omega^{1}_{B/R}}{\mathfrak{b}\Omega^{1}_{B/R} + Bd_{B/R}\mathfrak{b}} \simeq I/I^{2}.$$ We note that the $R$-linear derivation $A = R[\boldsymbol{x}]/(f_{j}(\boldsymbol{x}))_{j \in T} \rightarrow I/I^{2}$ given by $g \mapsto 1 \otimes g - g \otimes 1$ can be explicitly described as $$g(\boldsymbol{x}) \mapsto g(\boldsymbol{x} + \boldsymbol{\Delta}) - g(\boldsymbol{x}) \equiv \sum_{i \in S}\frac{\partial g(\boldsymbol{x})}{\partial x_{i}} \Delta_{i} \pmod{I^{2}},$$ which shows that it must be the exterior derivative. $\Box$
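
A sanity check in the simplest case (mine, following the notation of the proof): take $A = R[x],$ so that $A \otimes_{R} A = R[x, \Delta]$ with $\Delta = y - x$ and $I = (\Delta).$ Then $I/I^{2}$ is the free $A$-module on $\overline{\Delta},$ and $$dg = g(x + \Delta) - g(x) \equiv g'(x)\Delta \pmod{\Delta^{2}},$$ recovering $\Omega^{1}_{R[x]/R} = R[x]\,dx$ with $dg = g'(x)\,dx.$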

$\mathbb{Z}_{p}[t]/(P(t))$ is a DVR if $P(t)$ is irreducible in $\mathbb{F}_{p}[t]$

Let $p$ be a prime and $P(t) \in \mathbb{Z}_{p}[t]$ a monic polynomial whose image in $\mathbb{F}_{p}[t]$ modulo $p$ (which we also denote by $...