2. Unrestricted MP2 First-Order Gradients and Intermediates

2.1. Preliminaries

[1]:
%load_ext autoreload
%autoreload 2
%matplotlib notebook

from matplotlib import pyplot as plt
import numpy as np
from pyscf import gto, scf, grad, mp
from pyscf.scf import ucphf
from functools import partial
from pyxdh.DerivOnce import GradUMP2
from pyxdh.Utilities import NucCoordDerivGenerator, NumericDiff
import warnings

np.einsum = partial(np.einsum, optimize=["greedy", 1024 ** 3 * 2 / 8])
np.allclose = partial(np.allclose, atol=1e-6, rtol=1e-4)
np.set_printoptions(5, linewidth=180, suppress=True)
warnings.filterwarnings("ignore")
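The `partial` calls above bake default arguments into `np.einsum` and `np.allclose` for the rest of the notebook. The same pattern can be checked in isolation (values below are illustrative only):

```python
from functools import partial
import numpy as np

# Bake looser tolerances into a comparison helper, as done above for np.allclose
allclose_loose = partial(np.allclose, atol=1e-6, rtol=1e-4)

a = np.array([1.0, 2.0])
b = a + 5e-7  # deviation within the baked-in atol
assert allclose_loose(a, b)
assert not np.allclose(a, b, atol=1e-9, rtol=1e-9)  # fails under strict tolerances
```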
[2]:
mol = gto.Mole()
mol.atom = """
C  0. 0. 0.
H  1. 0. 0.
H  0. 2. 0.
H  0. 0. 1.5
"""
mol.basis = "6-31G"
mol.spin = 1
mol.verbose = 0
mol.build()
[2]:
<pyscf.gto.mole.Mole at 0x7fdec1f09160>
[3]:
scf_eng = scf.UHF(mol).run()
scf_eng.e_tot
[3]:
-39.315520907160426

Below we use PySCF to compute some intermediate results that appear in the MP2 first-order gradient. First, the MP2 correlation energy:

[4]:
mp2_eng = mp.UMP2(scf_eng).run()
mp2_eng.e_corr
[4]:
-0.06954272279822271

The MP2 gradient can be obtained as follows:

[5]:
mp2_grad = grad.ump2.Gradients(mp2_eng).run()
mp2_grad.de
[5]:
array([[ 0.07528, -0.04775, -0.1069 ],
       [-0.09696,  0.01381,  0.02217],
       [ 0.0069 ,  0.02597,  0.00598],
       [ 0.01478,  0.00797,  0.07876]])

The correlation-energy contribution to the gradient above can be extracted as follows:

[6]:
scf_grad = grad.uhf.Gradients(scf_eng).run()
mp2_grad.de - scf_grad.de
[6]:
array([[ 0.01346, -0.01382,  0.01127],
       [-0.01264,  0.00222, -0.00289],
       [ 0.00111,  0.01063,  0.00073],
       [-0.00193,  0.00097, -0.0091 ]])

All quantities related to the SCF first-order gradient are defined as follows:

[7]:
gradh = GradUMP2({"scf_eng": scf_eng, "cphf_tol": 1e-10})
gradh.eng
[7]:
-39.38506362995865
[8]:
nmo, nao, natm = gradh.nmo, gradh.nao, gradh.natm
nocc, nvir = gradh.nocc, gradh.nvir
so, sv, sa = gradh.so, gradh.sv, gradh.sa
C, e = gradh.C, gradh.e
Co, eo = gradh.Co, gradh.eo
Cv, ev = gradh.Cv, gradh.ev
mo_occ = gradh.mo_occ
[9]:
H_0_ao, H_0_mo, H_1_ao, H_1_mo = gradh.H_0_ao, gradh.H_0_mo, gradh.H_1_ao, gradh.H_1_mo
S_0_ao, S_0_mo, S_1_ao, S_1_mo = gradh.S_0_ao, gradh.S_0_mo, gradh.S_1_ao, gradh.S_1_mo
F_0_ao, F_0_mo, F_1_ao, F_1_mo = gradh.F_0_ao, gradh.F_0_mo, gradh.F_1_ao, gradh.F_1_mo
eri0_ao, eri0_mo, eri1_ao, eri1_mo = gradh.eri0_ao, gradh.eri0_mo, gradh.eri1_ao, gradh.eri1_mo
[10]:
Ax0_Core = gradh.Ax0_Core

2.2. Intermediate Tensors in the MP2 Energy

  • D_iajb \(D_{ij}^{ab, \sigma \sigma'}\), dim: \((\sigma \sigma', i, a, j, b)\), type: Tuple[np.ndarray]

\[D_{ij}^{ab, \sigma \sigma'} = \varepsilon_i^\sigma - \varepsilon_a^\sigma + \varepsilon_j^{\sigma'} - \varepsilon_b^{\sigma'}\]

Here \(i, a\) carry \(\sigma\) spin and \(j, b\) carry \(\sigma'\) spin; the same convention applies below.

[11]:
D_iajb = gradh.D_iajb
[t.shape for t in D_iajb]
[11]:
[(5, 10, 5, 10), (5, 10, 4, 11), (4, 11, 4, 11)]
[12]:
D_iajb_ = (
    eo[0][:, None, None, None] - ev[0][None, :, None, None] + eo[0][None, None, :, None] - ev[0][None, None, None, :],
    eo[0][:, None, None, None] - ev[0][None, :, None, None] + eo[1][None, None, :, None] - ev[1][None, None, None, :],
    eo[1][:, None, None, None] - ev[1][None, :, None, None] + eo[1][None, None, :, None] - ev[1][None, None, None, :]
)
[np.allclose(t_, t) for t_, t in zip(D_iajb_, D_iajb)]
[12]:
[True, True, True]
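The verification above relies on NumPy broadcasting: inserting `None` axes turns two orbital-energy vectors into a four-index tensor without explicit loops. A minimal self-contained sketch with made-up orbital energies (not those of the molecule above):

```python
import numpy as np

# Hypothetical orbital energies for a toy system
eo = np.array([-0.5, -0.3])      # occupied orbital energies
ev = np.array([0.2, 0.4, 0.9])   # virtual orbital energies

# Broadcast to shape (nocc, nvir, nocc, nvir): D[i,a,j,b] = eo[i] - ev[a] + eo[j] - ev[b]
D = (eo[:, None, None, None] - ev[None, :, None, None]
     + eo[None, None, :, None] - ev[None, None, None, :])

# Verify against an explicit loop construction
D_loop = np.empty((2, 3, 2, 3))
for i in range(2):
    for a in range(3):
        for j in range(2):
            for b in range(3):
                D_loop[i, a, j, b] = eo[i] - ev[a] + eo[j] - ev[b]

assert np.allclose(D, D_loop)
```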
  • t_iajb \(t_{ij}^{ab, \sigma \sigma'}\), dim: \((\sigma \sigma', i, a, j, b)\), type: Tuple[np.ndarray]

\[t_{ij}^{ab, \sigma \sigma'} = \frac{(ia|jb)^{\sigma \sigma'}}{D_{ij}^{ab, \sigma \sigma'}}\]
[13]:
t_iajb = gradh.t_iajb
[t.shape for t in t_iajb]
[13]:
[(5, 10, 5, 10), (5, 10, 4, 11), (4, 11, 4, 11)]
[14]:
t_iajb_ = (
    eri0_mo[0][so[0], sv[0], so[0], sv[0]] / D_iajb[0],
    eri0_mo[1][so[0], sv[0], so[1], sv[1]] / D_iajb[1],
    eri0_mo[2][so[1], sv[1], so[1], sv[1]] / D_iajb[2]
)
[np.allclose(t_, t) for t_, t in zip(t_iajb_, t_iajb)]
[14]:
[True, True, True]
  • T_iajb \(T_{ij}^{ab, \sigma \sigma'}\), dim: \((\sigma \sigma', i, a, j, b)\), type: Tuple[np.ndarray]

\[\begin{split}\begin{align} T_{ij}^{ab, \alpha \alpha} &= c_\mathrm{c} c_\mathrm{SS} \frac{1}{2} (t_{ij}^{ab, \alpha \alpha} - t_{ij}^{ba, \alpha \alpha}) \\ T_{ij}^{ab, \alpha \beta} &= c_\mathrm{c} c_\mathrm{OS} t_{ij}^{ab, \alpha \beta} \end{align}\end{split}\]

The \(\beta \beta\) case \(T_{ij}^{ab, \beta \beta}\) is obtained by interchanging \(\alpha\) and \(\beta\) on the RHS. For ordinary MP2,

\[c_\mathrm{c} = c_\mathrm{SS} = c_\mathrm{OS} = 1\]

so we can simplify the code somewhat when implementing plain MP2. For XYGJ-OS, however,

\[c_\mathrm{c} = 0.4364, \quad c_\mathrm{SS} = 0, \quad c_\mathrm{OS} = 1\]

so the code below requires minor changes when working with such functionals.

[15]:
T_iajb = gradh.T_iajb
[t.shape for t in T_iajb]
[15]:
[(5, 10, 5, 10), (5, 10, 4, 11), (4, 11, 4, 11)]
[16]:
T_iajb_ = (
    0.5 * (t_iajb[0] - t_iajb[0].swapaxes(-1, -3)),
    t_iajb[1],
    0.5 * (t_iajb[2] - t_iajb[2].swapaxes(-1, -3))
)
[np.allclose(t_, t) for t_, t in zip(T_iajb_, T_iajb)]
[16]:
[True, True, True]
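The general spin-scaled definition from the equations above can be kept in a single helper, so that the plain-MP2 simplification and the XYGJ-OS coefficients share one code path. A sketch with random toy amplitudes (`make_T` is a hypothetical helper, not part of pyxdh):

```python
import numpy as np

rng = np.random.default_rng(0)
# Toy amplitudes in (i, a, j, b) layout; shapes here are arbitrary
t_aa = rng.standard_normal((2, 3, 2, 3))
t_ab = rng.standard_normal((2, 3, 2, 3))
t_bb = rng.standard_normal((2, 3, 2, 3))

def make_T(t_iajb, cc=1.0, c_ss=1.0, c_os=1.0):
    """Spin-scaled T amplitudes; cc = c_ss = c_os = 1 recovers plain MP2.
    swapaxes(-1, -3) exchanges the two virtual indices a and b."""
    t0, t1, t2 = t_iajb
    return (
        cc * c_ss * 0.5 * (t0 - t0.swapaxes(-1, -3)),
        cc * c_os * t1,
        cc * c_ss * 0.5 * (t2 - t2.swapaxes(-1, -3)),
    )

T_mp2 = make_T((t_aa, t_ab, t_bb))                       # plain MP2
T_os = make_T((t_aa, t_ab, t_bb), cc=0.4364, c_ss=0.0)   # XYGJ-OS-style scaling
assert np.allclose(T_os[0], 0)  # same-spin part vanishes when c_ss = 0
```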

Finally, we can reproduce the MP2 correlation energy:

\[E_\mathrm{corr} = T_{ij}^{ab, \sigma \sigma'} t_{ij}^{ab, \sigma \sigma'} D_{ij}^{ab, \sigma \sigma'}\]
[17]:
np.array([(T_iajb[i] * t_iajb[i] * D_iajb[i]).sum() for i in range(3)]).sum()
[17]:
-0.06954272279822277
[18]:
mp2_eng.e_corr
[18]:
-0.06954272279822271

2.3. Intermediate Tensors for the MP2 Gradient

2.3.1. \(W_{pq}^{\sigma, \mathrm{MP2}}[\mathrm{I}]\)

  • W_I \(W_{pq}^{\sigma, \mathrm{MP2}}[\mathrm{I}]\), dim: \((\sigma, p, q)\); the \(\alpha\) and \(\beta\) spin blocks must be generated separately

[19]:
W_I = np.zeros((2, nmo, nmo))
\[W_{ij}^{\alpha, \mathrm{MP2}} [\mathrm{I}] = - 2 T_{ik}^{ab, \alpha \alpha} (ja|kb)^{\alpha \alpha} - T_{ik}^{ab, \alpha \beta} (ja|kb)^{\alpha \beta}\]

For the \(\beta\) spin case, simply interchange \(\alpha\) and \(\beta\) in the expression above.

[20]:
W_I[0, so[0], so[0]] = (
    - 2 * np.einsum("iakb, jakb -> ij", T_iajb[0], t_iajb[0] * D_iajb[0])
    -     np.einsum("iakb, jakb -> ij", T_iajb[1], t_iajb[1] * D_iajb[1]))
W_I[1, so[1], so[1]] = (
    - 2 * np.einsum("iakb, jakb -> ij", T_iajb[2], t_iajb[2] * D_iajb[2])
    -     np.einsum("kbia, kbja -> ij", T_iajb[1], t_iajb[1] * D_iajb[1]))
\[W_{ab}^{\alpha, \mathrm{MP2}} [\mathrm{I}] = - 2 T_{ij}^{ac, \alpha \alpha} (ib|jc)^{\alpha \alpha} - T_{ij}^{ac, \alpha \beta} (ib|jc)^{\alpha \beta}\]
[21]:
W_I[0, sv[0], sv[0]] = (
    - 2 * np.einsum("iajc, ibjc -> ab", T_iajb[0], t_iajb[0] * D_iajb[0])
    -     np.einsum("iajc, ibjc -> ab", T_iajb[1], t_iajb[1] * D_iajb[1]))
W_I[1, sv[1], sv[1]] = (
    - 2 * np.einsum("iajc, ibjc -> ab", T_iajb[2], t_iajb[2] * D_iajb[2])
    -     np.einsum("jcia, jcib -> ab", T_iajb[1], t_iajb[1] * D_iajb[1]))
\[W_{ai}^{\alpha, \mathrm{MP2}} [\mathrm{I}] = - 4 T_{jk}^{ab, \alpha \alpha} (ij|bk)^{\alpha \alpha} - 2 T_{jk}^{ab, \alpha \beta} (ij|bk)^{\alpha \beta}\]
[22]:
W_I[0, sv[0], so[0]] = (
    - 4 * np.einsum("jakb, ijbk -> ai", T_iajb[0], eri0_mo[0][so[0], so[0], sv[0], so[0]])
    - 2 * np.einsum("jakb, ijbk -> ai", T_iajb[1], eri0_mo[1][so[0], so[0], sv[1], so[1]]))
W_I[1, sv[1], so[1]] = (
    - 4 * np.einsum("jakb, ijbk -> ai", T_iajb[2], eri0_mo[2][so[1], so[1], sv[1], so[1]])
    - 2 * np.einsum("kbja, bkij -> ai", T_iajb[1], eri0_mo[1][sv[0], so[0], so[1], so[1]]))

Note that our program stores only eri0_mo[1] \((pq|rs)^{\alpha \beta}\) and not \((pq|rs)^{\beta \alpha}\); to contract against the latter, either transpose it into the former first, or reorder the contraction subscripts in np.einsum accordingly.
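The equivalence between the two approaches can be illustrated with a random tensor: the missing \((pq|rs)^{\beta \alpha}\) block is just the \(\alpha \beta\) block with its two index pairs swapped, so transposing first and reordering einsum subscripts give identical contractions. A toy sketch (dimensions are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(1)
na, nb = 3, 4
# Toy mixed-spin integrals (pq|rs)^{alpha beta}: first index pair alpha, second beta
eri_ab = rng.standard_normal((na, na, nb, nb))

# (pq|rs)^{beta alpha} is obtained by swapping the two index pairs
eri_ba = eri_ab.transpose(2, 3, 0, 1)

X = rng.standard_normal((nb, nb))
# Contracting the beta-alpha block over its first index pair ...
res1 = np.einsum("pqrs, pq -> rs", eri_ba, X)
# ... equals contracting the alpha-beta block over its second index pair
res2 = np.einsum("rspq, pq -> rs", eri_ab, X)
assert np.allclose(res1, res2)
```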

2.3.2. \(D_{ij}^{\sigma, \mathrm{MP2}}\), \(D_{ab}^{\sigma, \mathrm{MP2}}\)

  • D_r_oovv \(D_{pq}^{\sigma, \mathrm{MP2}}\), only occ-occ and vir-vir part, dim: \((\sigma, p, q)\)

[23]:
D_r_oovv = np.zeros((2, nmo, nmo))
\[D_{ij}^{\alpha, \text{MP2}} = - 2 T_{ik}^{ab, \alpha \alpha} t_{jk}^{ab, \alpha \alpha} - T_{ik}^{ab, \alpha \beta} t_{jk}^{ab, \alpha \beta}\]
[24]:
D_r_oovv[0, so[0], so[0]] = (
    - 2 * np.einsum("iakb, jakb -> ij", T_iajb[0], t_iajb[0])
    -     np.einsum("iakb, jakb -> ij", T_iajb[1], t_iajb[1]))
D_r_oovv[1, so[1], so[1]] = (
    - 2 * np.einsum("iakb, jakb -> ij", T_iajb[2], t_iajb[2])
    -     np.einsum("kbia, kbja -> ij", T_iajb[1], t_iajb[1]))
\[D_{ab}^{\alpha, \text{MP2}} = 2 T_{ij}^{ac, \alpha \alpha} t_{ij}^{bc, \alpha \alpha} + T_{ij}^{ac, \alpha \beta} t_{ij}^{bc, \alpha \beta}\]
[25]:
D_r_oovv[0, sv[0], sv[0]] = (
    + 2 * np.einsum("iajc, ibjc -> ab", T_iajb[0], t_iajb[0])
    +     np.einsum("iajc, ibjc -> ab", T_iajb[1], t_iajb[1]))
D_r_oovv[1, sv[1], sv[1]] = (
    + 2 * np.einsum("iajc, ibjc -> ab", T_iajb[2], t_iajb[2])
    +     np.einsum("jcia, jcib -> ab", T_iajb[1], t_iajb[1]))

2.3.3. \(L_{ai}^\sigma\)

  • L \(L_{ai}^\sigma\), dim: \((\sigma, a, i)\), type: Tuple[np.ndarray]

\[L_{ai}^\alpha = \mathtt{Ax}_{ai}^\alpha [D_{kl}^{\sigma, \mathrm{MP2}}] + \mathtt{Ax}_{ai}^\alpha [D_{bc}^{\sigma, \mathrm{MP2}}] - 4 T_{jk}^{ab, \alpha \alpha} (ij|bk)^{\alpha \alpha} - 2 T_{jk}^{ab, \alpha \beta} (ij|bk)^{\alpha \beta} + 4 T_{ij}^{bc, \alpha \alpha} (ab|jc)^{\alpha \alpha} + 2 T_{ij}^{bc, \alpha \beta} (ab|jc)^{\alpha \beta}\]
[26]:
L = Ax0_Core(sv, so, sa, sa)(D_r_oovv)
[27]:
L[0][:] += (
    - 4 * np.einsum("jakb, ijbk -> ai", T_iajb[0], eri0_mo[0][so[0], so[0], sv[0], so[0]])
    - 2 * np.einsum("jakb, ijbk -> ai", T_iajb[1], eri0_mo[1][so[0], so[0], sv[1], so[1]])
    + 4 * np.einsum("ibjc, abjc -> ai", T_iajb[0], eri0_mo[0][sv[0], sv[0], so[0], sv[0]])
    + 2 * np.einsum("ibjc, abjc -> ai", T_iajb[1], eri0_mo[1][sv[0], sv[0], so[1], sv[1]])
)
[28]:
L[1][:] += (
    - 4 * np.einsum("jakb, ijbk -> ai", T_iajb[2], eri0_mo[2][so[1], so[1], sv[1], so[1]])
    - 2 * np.einsum("kbja, bkij -> ai", T_iajb[1], eri0_mo[1][sv[0], so[0], so[1], so[1]])
    + 4 * np.einsum("ibjc, abjc -> ai", T_iajb[2], eri0_mo[2][sv[1], sv[1], so[1], sv[1]])
    + 2 * np.einsum("jcib, jcab -> ai", T_iajb[1], eri0_mo[1][so[0], sv[0], sv[1], sv[1]])
)

2.3.4. \(D_{ai}^{\sigma, \mathrm{MP2}}\)

  • D_r_vo \(D_{ai}^{\sigma, \mathrm{MP2}}\), dim: \((\sigma, a, i)\), type: Tuple[np.ndarray]

\[- (\varepsilon_a^\sigma - \varepsilon_i^\sigma) D_{ai}^{\sigma, \mathrm{MP2}} - \mathtt{Ax}_{ai}^\sigma [D_{bj}^{\sigma', \mathrm{MP2}}] = L_{ai}^\sigma\]
[29]:
def fx(X):
    X_alpha = X[:, :nocc[0] * nvir[0]].reshape((nvir[0], nocc[0]))
    X_beta = X[:, nocc[0] * nvir[0]:].reshape((nvir[1], nocc[1]))
    Ax = Ax0_Core(sv, so, sv, so, in_cphf=True)((X_alpha, X_beta))
    result = np.concatenate([Ax[0].reshape(-1), Ax[1].reshape(-1)])
    return result

D_r_vo = ucphf.solve(fx, e, mo_occ, L, max_cycle=100, tol=1e-10)[0]
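The `fx` wrapper above packs the \(\alpha\) and \(\beta\) blocks into one flattened vector because the iterative solver works on a single array. The pack/unpack pattern can be checked in isolation with toy dimensions (the shapes below are arbitrary, not those of the molecule above):

```python
import numpy as np

rng = np.random.default_rng(2)
nocc = (3, 2)  # toy occupied counts for alpha, beta
nvir = (4, 5)  # toy virtual counts

X_alpha = rng.standard_normal((nvir[0], nocc[0]))
X_beta = rng.standard_normal((nvir[1], nocc[1]))

# Pack: flatten each (nvir, nocc) block and concatenate alpha before beta,
# keeping a leading batch dimension as the solver does
X = np.concatenate([X_alpha.ravel(), X_beta.ravel()])[None, :]

# Unpack: slice by the size of the alpha block, then reshape
X_a = X[:, :nocc[0] * nvir[0]].reshape((nvir[0], nocc[0]))
X_b = X[:, nocc[0] * nvir[0]:].reshape((nvir[1], nocc[1]))

assert np.allclose(X_a, X_alpha) and np.allclose(X_b, X_beta)
```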

2.3.5. \(D_{pq}^{\sigma, \mathrm{MP2}}\)

  • D_r \(D_{pq}^{\sigma, \mathrm{MP2}}\), dim: \((\sigma, p, q)\)

[30]:
D_r = np.copy(D_r_oovv)
D_r[0][sv[0], so[0]] = D_r_vo[0]
D_r[1][sv[1], so[1]] = D_r_vo[1]

2.4. MP2 First-Order Gradient

2.4.1. Relaxed-Density Contributions

\[\partial_{\mathbb{A}} E_\mathrm{corr} \leftarrow D_{pq}^{\sigma, \mathrm{MP2}} B_{pq}^{\mathbb{A}, \sigma}\]
[31]:
E_1_MP2_contrib1 = np.einsum("xpq, xApq -> A", D_r, gradh.B_1).reshape((natm, 3))
E_1_MP2_contrib1
[31]:
array([[ 0.00819, -0.01257,  0.00115],
       [-0.00897,  0.0027 , -0.00033],
       [ 0.00095,  0.00815,  0.00089],
       [-0.00017,  0.00171, -0.00171]])
\[\partial_{\mathbb{A}} E_\mathrm{corr} \leftarrow W_{pq}^{\sigma, \mathrm{MP2}} [\mathrm{I}] S_{pq}^{\mathbb{A}, \sigma}\]
[32]:
E_1_MP2_contrib2 = np.einsum("xpq, xApq -> A", W_I, S_1_mo).reshape((natm, 3))
E_1_MP2_contrib2
[32]:
array([[-0.00464, -0.00136,  0.00005],
       [ 0.00391, -0.0006 , -0.00406],
       [ 0.00052,  0.00219, -0.00011],
       [ 0.00021, -0.00023,  0.00413]])
\[\partial_{\mathbb{A}} E_\mathrm{corr} \leftarrow 2 T_{ij}^{ab, \sigma \sigma'} (ia|jb)^{\mathbb{A}, \sigma \sigma'}\]
[33]:
E_1_MP2_contrib3 = (
    + 2 * np.einsum("iajb, Aiajb -> A", T_iajb[0], eri1_mo[0][:, so[0], sv[0], so[0], sv[0]])
    + 2 * np.einsum("iajb, Aiajb -> A", T_iajb[1], eri1_mo[1][:, so[0], sv[0], so[1], sv[1]])
    + 2 * np.einsum("iajb, Aiajb -> A", T_iajb[2], eri1_mo[2][:, so[1], sv[1], so[1], sv[1]])
).reshape((natm, 3))
E_1_MP2_contrib3
[33]:
array([[ 0.00991,  0.0001 ,  0.01007],
       [-0.00758,  0.00012,  0.0015 ],
       [-0.00037,  0.00029, -0.00005],
       [-0.00197, -0.00051, -0.01152]])

The correlation-energy gradient is then the sum of the three contributions:

[34]:
E_1_MP2_contrib1 + E_1_MP2_contrib2 + E_1_MP2_contrib3
[34]:
array([[ 0.01346, -0.01382,  0.01127],
       [-0.01264,  0.00222, -0.00289],
       [ 0.00111,  0.01063,  0.00073],
       [-0.00193,  0.00097, -0.0091 ]])

We can compare this with the gradient from PySCF:

[35]:
mp2_grad.de - scf_grad.de
[35]:
array([[ 0.01346, -0.01382,  0.01127],
       [-0.01264,  0.00222, -0.00289],
       [ 0.00111,  0.01063,  0.00073],
       [-0.00193,  0.00097, -0.0091 ]])