Shape optimization
We now consider the problem of finding the shape of singular points from shape sensitivity; that is, for a given cost function $F$, find a mapping $\varphi ^O\in W^{1,\infty }(\Omega ;\mathbb{R}^d)$ such that $F(u^{\varphi ^O}) \lt F(u)$, where $u$ and $u^{\varphi ^O}$ are the solutions of problems (2.1) and (2.25a) with $\varphi _t=x+t\mu ^O$, $\varphi ^O=\varphi _{\epsilon ^O}$, for some $\mu ^O\in W^{1,\infty }(\Omega ;\mathbb{R}^d)$ and a number $\epsilon ^O \gt 0$. Our shape optimization includes the case $\varphi ^O(\Omega )=\Omega $, for example under mixed boundary conditions with $\varphi ^O(\Gamma _D)\neq \Gamma _D$. Expanding the cost function $F$ of $u^{\varphi _t}$, $\varphi _t=x+t\mu $, $\mu \in W^{1,\infty }(\Omega ;\mathbb{R}^d)$, with respect to $t$, we derive $$ F(u^{\varphi _t})=F(u)+t\,dF(u)[\mu ]+o(t),\qquad dF(u)[\mu ] =\left .\frac{d}{dt}F(u^{\varphi _t})\right |_{t=0}.$$ The shape derivative $dF(u)[\mu ]$ is usually represented by the boundary expression \begin{equation} dF(u)[\mu ]=\int _{\partial \Omega }g_{\Gamma }(\mu \cdot n)\, ds \tag{3.13} \end{equation} with an integrable function $g_{\Gamma }$ defined on $\partial \Omega $, provided the boundary $\partial \Omega $ is smooth [Theorem 2.27, Sok92]. The advantage of the boundary expression (3.13) is that a descent direction $-g_{\Gamma }n$ is readily available. However, directly moving the nodes on the boundary $\partial \Omega $ along the shape gradient $-g_{\Gamma }n$ leads to numerical instability. There are two ways to eliminate such instability: one is to treat it as a problem of numerical stability in a finite-dimensional design space, called the parametric method (see e.g. [Sa99]); the other is to find general shapes from the sensitivity of a shape functional with respect to arbitrary perturbations of the shape, called the nonparametric method, where (before discretization) the design space is infinite-dimensional.
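As a toy illustration of the boundary expression (3.13), consider the volume cost $F(\Omega )=|\Omega |$, for which the classical shape derivative is $dF[\mu ]=\int _{\partial \Omega }(\mu \cdot n)\,ds$, i.e. $g_{\Gamma }\equiv 1$, so the descent direction $-g_{\Gamma }n$ is the inward normal $-n$. The following NumPy sketch (a hypothetical example, not the setting of problems (2.1) or (2.25a)) moves the nodes of a polygonal boundary along $-n$ and checks that the cost decreases:

```python
import numpy as np

def polygon_area(pts):
    # shoelace formula for a simple polygon given by its vertices
    x, y = pts[:, 0], pts[:, 1]
    return 0.5 * abs(np.dot(x, np.roll(y, -1)) - np.dot(y, np.roll(x, -1)))

def outward_normals(pts):
    # vertex normals of a counterclockwise polygon:
    # rotate edge tangents by -90 degrees, average adjacent edges, normalize
    e = np.roll(pts, -1, axis=0) - pts            # edge vectors
    en = np.stack([e[:, 1], -e[:, 0]], axis=1)    # edge normals (outward for CCW)
    vn = en + np.roll(en, 1, axis=0)              # average the two adjacent edges
    return vn / np.linalg.norm(vn, axis=1, keepdims=True)

# unit circle sampled as a CCW polygon
theta = np.linspace(0.0, 2 * np.pi, 64, endpoint=False)
pts = np.stack([np.cos(theta), np.sin(theta)], axis=1)

# descent direction -g_Gamma * n with g_Gamma = 1 (volume cost), small step t
t = 0.05
moved = pts - t * outward_normals(pts)
```

For this smooth toy cost the direct node movement is harmless; the instability discussed above appears for rough $g_{\Gamma }$, which motivates the smoothing methods that follow.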
A well-known nonparametric method developed in Japan is the H1-gradient method (originally called the ``traction method'') [Az94, A-W96, Az20]: find $\mu ^O$ by solving the auxiliary variational problem \begin{equation} b(\mu ^O,\eta ) = -dF(u)[\eta ] \quad \forall \eta \in M(\Omega ) \tag{3.14} \end{equation} where $b(\cdot ,\cdot )$ is a coercive bilinear form, \begin{equation} b(\eta ,\eta )\ge \alpha _b\|\eta \|_{1,\Omega }^2 \quad \forall \eta \in M(\Omega ) \tag{3.15} \end{equation} with a constant $\alpha _b \gt 0$, and $M(\Omega )$ is a suitable subspace of $H^1(\Omega ;\mathbb{R}^d)$. Azegami [Az94, A-W96] chose the bilinear form $b(\cdot ,\cdot )$ from linear elasticity. Similar nonparametric methods are used by French researchers; for example, Allaire [A-P06, Al07] uses the bilinear form $b(\mu ,\eta ) =\int _{\Omega }\{\nabla \mu :\nabla \eta +\mu \cdot \eta \}\, dx$ for shape optimization, and we also call this the H1-gradient method here. Replacing $dF(u)[\mu ]$ in (3.14) with the shape sensitivities given by the GJ-integral, we have, for any $\eta \in M(\Omega )$, when the conditions in Corollary 2.13 are satisfied, \begin{equation} b(\mu ^O,\eta )= \left \{\begin{array}{ll} R_{\Omega }(u,\eta )+\int _{\partial \Omega }fu(\eta \cdot n)\,ds & \textrm{(energy)}\\ -2\left \{R_{\Omega }(u,\eta )+\int _{\partial \Omega }fu(\eta \cdot n)\,ds\right \} & \textrm{(mean compliance)}\\ -\delta R_{\Omega }(u,\eta )[u_g] -\int _{\partial \Omega } (fu_g+\widehat{g}(u)-\nabla _z\widehat{g}(u)u)(\eta \cdot n)\,ds & \left (F(u)=\int _{\Omega }\widehat{g}(u)\,dx\right ) \end{array}\right . \tag{3.16} \end{equation} where $u_g$ is the solution of problem (3.8). Note that the right-hand side of (3.16) takes finite values for every weak solution of (2.1). The solution $\mu ^O$ of (3.16) is unique.
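A minimal sketch of the auxiliary problem (3.14), assuming Allaire's bilinear form $b(\mu ,\eta )=\int (\mu '\eta '+\mu \eta )\,dx$ on the interval $(0,1)$ with P1 finite elements, homogeneous Dirichlet ends, and a lumped mass matrix for brevity; the deliberately rough nodal vector `g` is a hypothetical stand-in for an $L^2$ representative of $dF(u)[\cdot ]$:

```python
import numpy as np

n = 200
h = 1.0 / n
x = np.linspace(h, 1 - h, n - 1)            # interior nodes of (0, 1)
g = np.sign(np.sin(40 * np.pi * x))         # rough shape gradient (toy data)

# P1 stiffness + lumped mass: b(mu, eta) = int (mu' eta' + mu eta) dx
main = np.full(n - 1, 2.0 / h + h)          # diagonal: 2/h (stiffness) + h (mass)
off = np.full(n - 2, -1.0 / h)              # off-diagonal stiffness entries
A = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)

rhs = -h * g                                # -dF(u)[eta] ~ -int g eta dx (lumped)
mu = np.linalg.solve(A, rhs)                # H1-smoothed descent direction
```

Because $b(\mu ^O,\mu ^O)=-dF(u)[\mu ^O]\ge 0$, the solve always returns a descent direction, and the $H^1$ solve smooths the rough input; this is the mechanism by which the nonparametric method avoids the instability of moving nodes by $-g_{\Gamma }n$ directly.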
Putting $\varphi ^O_t=x+t\mu ^O$, we have \begin{eqnarray} F(u^{\varphi ^O_t})&=&F(u)+t\left .\frac{d}{dt}F(u^{\varphi ^O_t})\right |_{t=0}+o(t)\notag \\ &=&F(u)-tb(\mu ^O,\mu ^O)+o(t)\notag \\ &\le &F(u)-t\alpha _b\|\mu ^O\|_{1,\Omega }^2+o(t). \tag{3.17} \end{eqnarray} If $\mu ^O\neq 0$, we can take a number $\epsilon ^O$ such that $F(u^{\varphi _t^O}) \lt F(u)$ for $0 \lt t\le \epsilon ^O$, so that $\varphi ^O=x+\epsilon ^O\mu ^O$ improves the shape of the singular points with respect to the cost function $F$. We already indicated that $[\eta \mapsto R_{\Omega }(u,\eta )]$ is continuous on $W^{1,\infty }(\Omega ;\mathbb{R}^d)$, but the H1-gradient method requires $[\eta \mapsto R_{\Omega }(u,\eta )]$ to be continuous on $H^1(\Omega ;\mathbb{R}^d)$. If the solution has some additional regularity, say $u\in H^s(\Omega ;\mathbb{R}^m)$, $s \gt 1$, then $[\eta \mapsto R_{\Omega }(u,\eta )]$ extends to a continuous functional on $H^1(\Omega ;\mathbb{R}^m)$. In numerical computation of (3.16), for example by FEM, (3.16) is well-defined if $[\mu \mapsto R_{\Omega }(u_h,\mu )]$ gives a good approximation of $[\mu \mapsto R_{\Omega }(u,\mu )]$, where $u_h$ is a FE-approximation of $u$. For the interface problem (2.51), the H1-gradient method becomes as follows when the conditions in Corollary 2.18 are satisfied: \begin{equation} b(\mu ^O,\eta )= \left \{\begin{array}{ll} \sum _{\kappa =1}^K R_{\Omega _{\kappa }}(u,\eta )+\int _{\partial \Omega }fu(\eta \cdot n)\,ds & \textrm{(energy)}\\ -2\left \{\sum _{\kappa =1}^KR_{\Omega _{\kappa }}(u_{\kappa },\eta )+\int _{\partial \Omega }fu(\eta \cdot n)\,ds\right \} &\textrm{(mean compliance)}\\ -\sum _{\kappa =1}^K\delta R_{\Omega _{\kappa }}(u,\eta )[u_g] -\int _{\partial \Omega } (fu_g+\widehat{g}(u)-\nabla _z\widehat{g}(u)u)(\eta \cdot n)\,ds &\left (F(u)=\int _{\Omega }\widehat{g}(u)\,dx\right ) \end{array}\right .
\tag{3.18} \end{equation} When the conditions in Theorem 3.3 are satisfied, the H1-gradient method for an eigenvalue reads \begin{equation} b(\mu ^O,\eta )=2R_{\Omega }^E(u_{\lambda },\eta )+\lambda \int _{\partial \Omega }u_{\lambda }^2(\eta \cdot n)\,ds. \tag{3.19} \end{equation} For the shape optimization of energy and mean compliance under the Stokes problem, the H1-gradient method becomes \begin{equation} b(\mu ^O,\eta )= \left \{\begin{array}{ll} R_{\Omega }^S((u,p),\eta )+\int _{\partial \Omega }fu(\eta \cdot n)\,ds & \textrm{(energy)}\\ -2\left \{R_{\Omega }^S((u,p),\eta )+\int _{\partial \Omega }fu(\eta \cdot n)\,ds\right \} & \textrm{(mean compliance)} \end{array}\right . \tag{3.20} \end{equation}
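Once $\mu ^O$ is known, (3.17) guarantees a decrease of the cost for small enough $t$, so a suitable $\epsilon ^O$ can be chosen by backtracking: halve $t$ until the cost drops. A toy sketch, with a smooth cost on $\mathbb{R}^2$ standing in for $t\mapsto F(u^{\varphi ^O_t})$ and the identity in place of $b(\cdot ,\cdot )$ (all names hypothetical):

```python
import numpy as np

def F(x):
    # toy smooth cost, a stand-in for the shape functional F(u^{phi_t})
    return np.sin(x[0]) + x[1] ** 2

x = np.array([1.0, 1.0])
mu = -np.array([np.cos(x[0]), 2 * x[1]])   # descent direction: b = identity, mu = -grad F

# backtracking choice of epsilon^O: since dF[mu] = -|mu|^2 < 0,
# the expansion (3.17) guarantees this loop terminates
t = 1.0
while F(x + t * mu) >= F(x):
    t *= 0.5
eps_O = t
```

In practice one step of this kind is taken per iteration, the domain is updated by $\varphi ^O=x+\epsilon ^O\mu ^O$, and the state problem is re-solved on the new domain.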