---
title: "Mixed-Integer Quadrangulation"
subtitle: "Global Parameterization via Greedy Rounding"
---
## Paper at a Glance
::: {.callout-note}
## Reference
**Mixed-Integer Quadrangulation** — Bommes, Zimmer, Kobbelt (2009)
ACM Transactions on Graphics (SIGGRAPH), Vol. 28, No. 3, Article 77.
[Download the original paper (PDF)](assets/bommes2009_miq.pdf)
:::
This paper presents a complete pipeline for automatic quad meshing: starting from sparse directional constraints, it computes a smooth cross field, then a globally smooth parameterization whose integer iso-lines follow the cross field. Both stages are formulated as mixed-integer problems solved by an **adaptive greedy solver** — rounding integer variables one at a time with local Gauss-Seidel updates. This page focuses on the **global parameterization** (Section 5), which became the foundational "MIQ" method that later work (including [Bommes et al. 2013](integer_grid_maps.qmd)) sought to improve.
---
## 1. Introduction
### 1.1 The Quad Meshing Pipeline
The paper decomposes quad meshing into two stages, each a mixed-integer problem:
1. **Cross field generation** (Sec. 3-4) — Find the smoothest cross field interpolating sparse directional constraints, with automatic singularity placement
2. **Global parameterization** (Sec. 5) — Compute a map from the mesh to a 2D parameter domain whose iso-lines follow the cross field, with singularities at integer locations
> *"In both steps of the algorithm the task can be formulated in terms of a mixed-integer problem. These are linear problems where a subset of the variables is continuous ($\in \mathbb{R}$) and the others are discrete ($\in \mathbb{Z}$)."*
### 1.2 Quality Criteria
The paper identifies five key properties of a good quadrangulation:
1. **Individual element quality** — quads close to squares
2. **Orientation** — edges orthogonal to principal curvature directions
3. **Alignment** — feature lines preserved as mesh edge sequences
4. **Global structure** — singularities (vertices with valence $\neq 4$) placed to capture geometry
5. **Semantics** — application-specific requirements (e.g., simulation needs)
---
## 2. The Greedy Mixed-Integer Solver
Before diving into the parameterization, we need to understand the solver that drives both stages. The key insight: instead of solving the full mixed-integer problem at once (NP-hard), round integer variables **one at a time** in order of least rounding error.
### 2.1 Direct vs. Greedy Rounding
**Direct rounding**: Solve the continuous relaxation (ignore all integer constraints), then round every integer variable to the nearest integer simultaneously. This is fast but can produce large errors due to mutual dependencies between variables.
**Greedy rounding**: Round one variable at a time — the one with the smallest rounding error — and **recompute** the continuous solution after each rounding step. This propagates the effect of each rounding decision before making the next one.
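The two strategies are easy to contrast on a toy problem. The sketch below is my own illustration, not the paper's solver: the 1D chain energy $\sum_i (x_{i+1} - x_i - d_i)^2$, the `gaussSeidel` helper, and all numbers are invented. It greedily rounds the integer-constrained variable with the smallest rounding error, re-solves the remaining continuous variables, and repeats:

```javascript
// Toy chain energy E = Σ_i (x[i+1] - x[i] - d[i])², with x[0] held fixed.
// Greedy mixed-integer rounding (illustrative sketch, not the paper's code).
const d = [1.4, 0.4, 1.8];                 // target differences along the chain
const isInt = [false, true, false, true];  // x1 and x3 are integer-constrained
const fixed = [true, false, false, false];
let x = [0.3, 0, 0, 0];

function gaussSeidel(iters = 200) {
  for (let it = 0; it < iters; it++) {
    for (let i = 0; i < x.length; i++) {
      if (fixed[i]) continue;
      // Optimal x[i] given its neighbors: average of both chain predictions.
      let sum = 0, n = 0;
      if (i > 0)            { sum += x[i - 1] + d[i - 1]; n++; }
      if (i < x.length - 1) { sum += x[i + 1] - d[i];     n++; }
      x[i] = sum / n;
    }
  }
}

gaussSeidel(); // continuous relaxation: x ≈ [0.3, 1.7, 2.1, 3.9]

// Greedy rounding: pick the integer variable closest to an integer,
// snap it, freeze it, and propagate the change before the next pick.
let remaining = x.map((_, i) => i).filter(i => isInt[i] && !fixed[i]);
while (remaining.length > 0) {
  remaining.sort((a, b) =>
    Math.abs(x[a] - Math.round(x[a])) - Math.abs(x[b] - Math.round(x[b])));
  const i = remaining.shift();
  x[i] = Math.round(x[i]);
  fixed[i] = true;
  gaussSeidel();
}
console.log(x.map(v => v.toFixed(2)).join(", ")); // → 0.30, 2.00, 2.30, 4.00
```

Note how rounding $x_3$ first (error $0.1$) shifts the continuous optimum of $x_1$ before $x_1$ is rounded; direct rounding would snap both from the unconstrained solution simultaneously.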
```{ojs}
//| label: fig-rounding-comparison
viewof rounding_mode = Inputs.radio(["Direct rounding", "Greedy rounding"], {value: "Direct rounding", label: "Rounding strategy"})
```
```{ojs}
//| code-fold: true
{
const w = 600, h = 350;
const svg = d3.create("svg").attr("viewBox", `0 0 ${w} ${h}`).style("width", "100%").style("max-width", w + "px");
const isDirect = rounding_mode === "Direct rounding";
// Show a 1D example: 5 variables, some integer-constrained
const nVars = 6;
const contSolution = [0.3, 1.7, 2.1, 3.6, 4.2, 5.8];
const isInteger = [false, true, false, true, false, true]; // vars 1,3,5 are integer
// Direct rounding: round all integers at once
const directRounded = contSolution.map((v, i) => isInteger[i] ? Math.round(v) : v);
// Greedy rounding: round one by one, adjusting neighbors
// Simulate: round var 1 first (1.7 → 2), shift neighbors slightly
const greedyRounded = [...contSolution];
// Round var 5 first (closest to integer: 5.8 → 6, error 0.2)
greedyRounded[5] = 6; greedyRounded[4] = 4.3;
// Round var 1 next (1.7 → 2, error 0.3)
greedyRounded[1] = 2; greedyRounded[0] = 0.4; greedyRounded[2] = 2.0;
// Round var 3 (3.6 → 4, error 0.4)
greedyRounded[3] = 4; greedyRounded[2] = 2.1; greedyRounded[4] = 4.2;
const values = isDirect ? directRounded : greedyRounded;
const margin = {left: 50, right: 30, top: 50, bottom: 60};
const plotW = w - margin.left - margin.right;
const plotH = h - margin.top - margin.bottom;
const g = svg.append("g").attr("transform", `translate(${margin.left},${margin.top})`);
// X axis
const xScale = d3.scaleLinear().domain([0, nVars - 1]).range([0, plotW]);
const yScale = d3.scaleLinear().domain([0, 7]).range([plotH, 0]);
// Integer grid lines
for (let i = 0; i <= 7; i++) {
g.append("line")
.attr("x1", 0).attr("y1", yScale(i)).attr("x2", plotW).attr("y2", yScale(i))
.attr("stroke", "#eee").attr("stroke-width", 1);
g.append("text").attr("x", -10).attr("y", yScale(i) + 4)
.attr("text-anchor", "end").attr("font-size", 11).attr("fill", "#aaa").text(i);
}
// Axis labels
g.append("text").attr("x", plotW / 2).attr("y", plotH + 40)
.attr("text-anchor", "middle").attr("font-size", 12).attr("fill", "#666").text("Variable index");
g.append("text").attr("x", -35).attr("y", plotH / 2)
.attr("text-anchor", "middle").attr("font-size", 12).attr("fill", "#666")
.attr("transform", `rotate(-90, -35, ${plotH/2})`).text("Value");
// Continuous solution (dotted line)
const contLine = d3.line().x((d, i) => xScale(i)).y(d => yScale(d));
g.append("path").attr("d", contLine(contSolution))
.attr("fill", "none").attr("stroke", "#999").attr("stroke-width", 1.5).attr("stroke-dasharray", "4,3");
// Rounded solution (solid line)
g.append("path").attr("d", contLine(values))
.attr("fill", "none").attr("stroke", "steelblue").attr("stroke-width", 2);
// Draw points
for (let i = 0; i < nVars; i++) {
// Continuous value
g.append("circle").attr("cx", xScale(i)).attr("cy", yScale(contSolution[i])).attr("r", 4)
.attr("fill", "none").attr("stroke", "#999").attr("stroke-width", 1.5);
// Rounded value
const color = isInteger[i] ? "#e74c3c" : "steelblue";
g.append("circle").attr("cx", xScale(i)).attr("cy", yScale(values[i])).attr("r", 5)
.attr("fill", color);
// Arrow from continuous to rounded if different
if (Math.abs(values[i] - contSolution[i]) > 0.01) {
g.append("line")
.attr("x1", xScale(i) + 8).attr("y1", yScale(contSolution[i]))
.attr("x2", xScale(i) + 8).attr("y2", yScale(values[i]))
.attr("stroke", color).attr("stroke-width", 1.5).attr("opacity", 0.5);
}
// Variable label
g.append("text").attr("x", xScale(i)).attr("y", plotH + 18)
.attr("text-anchor", "middle").attr("font-size", 11)
.attr("fill", isInteger[i] ? "#e74c3c" : "#666")
.text(isInteger[i] ? `x${i} (int)` : `x${i}`);
}
// Compute total error
let totalError = 0;
for (let i = 0; i < nVars; i++) {
if (isInteger[i]) totalError += Math.abs(values[i] - contSolution[i]);
}
// Legend
svg.append("circle").attr("cx", margin.left + 10).attr("cy", 15).attr("r", 4)
.attr("fill", "none").attr("stroke", "#999").attr("stroke-width", 1.5);
svg.append("text").attr("x", margin.left + 20).attr("y", 19)
.attr("font-size", 11).attr("fill", "#666").text("Continuous solution");
svg.append("circle").attr("cx", margin.left + 160).attr("cy", 15).attr("r", 5).attr("fill", "steelblue");
svg.append("text").attr("x", margin.left + 170).attr("y", 19)
.attr("font-size", 11).attr("fill", "#666").text("After rounding");
svg.append("circle").attr("cx", margin.left + 290).attr("cy", 15).attr("r", 5).attr("fill", "#e74c3c");
svg.append("text").attr("x", margin.left + 300).attr("y", 19)
.attr("font-size", 11).attr("fill", "#666").text("Integer variable");
svg.append("text").attr("x", w/2).attr("y", h - 5).attr("text-anchor", "middle")
.attr("font-size", 13).attr("fill", isDirect ? "#e74c3c" : "#27ae60")
.text(isDirect ?
"Direct: round all integers at once (no recomputation between roundings)" :
"Greedy: round one at a time, recompute continuous variables after each"
);
return svg.node();
}
```
### 2.2 The Adaptive Solver
The greedy solver is accelerated by **local Gauss-Seidel updates**: after rounding variable $x_i$, only variables in its dependency neighborhood need updating. The residual at each variable is:
$$r_k = b_k - \sum_{j=1}^{n} A_{kj} x_j$$
If $|r_k|$ exceeds a tolerance ($10^{-6}$), update $x_k \leftarrow x_k + r_k / A_{kk}$ and push its neighbors onto the queue. This exploits the **sparsity** of the system — rounding a period jump on one edge only affects a local neighborhood.
::: {.callout-tip}
## Why local updates matter
For the cross field, rounding a period jump $p_{ij}$ on one edge only affects the angles of the two adjacent triangles and their neighbors. The local Gauss-Seidel update propagates this change through the mesh without re-solving the entire system. This is what makes rounding tens of thousands of variables tractable.
:::
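The queue-driven update rule can be sketched in a few lines. This is my own toy system, not the paper's code: a small tridiagonal SPD matrix stands in for the sparse cross field system, and the queue is seeded with every variable (in the real solver it is seeded with the neighborhood of the variable that was just rounded):

```javascript
// Adaptive local Gauss-Seidel (sketch of the update rule in the text).
const n = 6;
// A: tridiagonal SPD matrix (2 on the diagonal, -1 off-diagonal).
const A = (i, j) => (i === j ? 2 : Math.abs(i - j) === 1 ? -1 : 0);
const neighbors = i => [i - 1, i + 1].filter(j => j >= 0 && j < n);
const b = [1, 0, 0, 0, 0, 1]; // exact solution of A x = b is all ones
const x = new Array(n).fill(0);

const residual = k =>
  b[k] - A(k, k) * x[k] - neighbors(k).reduce((s, j) => s + A(k, j) * x[j], 0);

const queue = [...Array(n).keys()];
const tol = 1e-6;
while (queue.length > 0) {
  const k = queue.shift();
  const r = residual(k);
  if (Math.abs(r) > tol) {
    x[k] += r / A(k, k);          // local update: x_k ← x_k + r_k / A_kk
    queue.push(...neighbors(k));  // only neighbors can become stale
  }
}
console.log(x.map(v => v.toFixed(4)).join(", "));
```

The invariant that makes this correct: any variable whose residual might exceed the tolerance is always in the queue, because an update at $x_k$ only perturbs the residuals of $x_k$ and its neighbors.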
---
## 3. Smooth Cross Fields
### 3.1 Cross Field Representation
A cross field on a triangle mesh $M = (V, E, F)$ is defined by:
- An **angle field** $\theta : F \to \mathbb{R}$ — one angle per face, giving the cross orientation relative to a local reference edge
- A **period jump field** $p : E \to \mathbb{Z}$ — one integer per edge, resolving the $\pi/2$ ambiguity between neighboring crosses
The four cross directions in triangle $T$ are $\mathbf{d}_k = (\cos(\theta + k\pi/2), \sin(\theta + k\pi/2))$ for $k = 0, 1, 2, 3$, measured relative to the first edge $\mathbf{e}$ of the triangle.
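As a quick sanity check, the four directions can be generated directly from $\theta$ (a sketch; `crossDirections` is a name I chose, not from the paper):

```javascript
// The four cross directions for a given θ (radians), measured against
// the triangle's local reference edge.
function crossDirections(theta) {
  return [0, 1, 2, 3].map(k => {
    const a = theta + (k * Math.PI) / 2;
    return [Math.cos(a), Math.sin(a)];
  });
}

const dirs = crossDirections(Math.PI / 6); // θ = 30°
// d2 = -d0 and d3 = -d1: a cross is invariant under rotation by π/2,
// which is exactly the ambiguity the integer period jump resolves.
console.log(dirs.map(d => d.map(v => v.toFixed(3)).join(", ")));
```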
```{ojs}
//| label: fig-cross-field-rep
viewof cf_theta = Inputs.range([0, 89], {value: 30, step: 1, label: "θ (degrees)"})
```
```{ojs}
//| label: fig-period-jump
viewof cf_pij = Inputs.range([0, 3], {value: 0, step: 1, label: "Period jump p_ij"})
```
```{ojs}
//| code-fold: true
{
const w = 600, h = 380;
const svg = d3.create("svg").attr("viewBox", `0 0 ${w} ${h}`).style("width", "100%").style("max-width", w + "px");
// Two adjacent triangles
const triL = [[100, 300], [300, 300], [200, 130]];
const triR = [[300, 300], [500, 300], [400, 130]];
// Draw triangles
svg.append("polygon")
.attr("points", triL.map(v => v.join(",")).join(" "))
.attr("fill", "rgba(70,130,180,0.08)").attr("stroke", "#2c3e50").attr("stroke-width", 2);
svg.append("polygon")
.attr("points", triR.map(v => v.join(",")).join(" "))
.attr("fill", "rgba(230,126,34,0.08)").attr("stroke", "#2c3e50").attr("stroke-width", 2);
// Shared edge highlight
svg.append("line")
.attr("x1", 300).attr("y1", 300).attr("x2", triL[2][0] + (triR[2][0] - triL[2][0]) * 0)
.attr("y2", triL[2][1])
.attr("stroke", "#e74c3c").attr("stroke-width", 3);
// Reference edges (first edge of each triangle)
svg.append("line")
.attr("x1", triL[0][0]).attr("y1", triL[0][1]).attr("x2", triL[1][0]).attr("y2", triL[1][1])
.attr("stroke", "#ccc").attr("stroke-width", 1).attr("stroke-dasharray", "3,3");
svg.append("text").attr("x", 200).attr("y", 318).attr("text-anchor", "middle")
.attr("font-size", 10).attr("fill", "#aaa").text("ref edge e");
// Cross in left triangle
const cxL = (triL[0][0] + triL[1][0] + triL[2][0]) / 3;
const cyL = (triL[0][1] + triL[1][1] + triL[2][1]) / 3;
const theta = cf_theta * Math.PI / 180;
const armR = 35;
for (let k = 0; k < 4; k++) {
const a = theta + k * Math.PI / 2;
// SVG y is flipped
const dx = armR * Math.cos(a), dy = -armR * Math.sin(a);
svg.append("line")
.attr("x1", cxL - dx).attr("y1", cyL - dy).attr("x2", cxL + dx).attr("y2", cyL + dy)
.attr("stroke", k % 2 === 0 ? "steelblue" : "rgba(70,130,180,0.5)")
.attr("stroke-width", 2).attr("stroke-linecap", "round");
}
// Cross in right triangle — angle shifted by kappa + pi/2 * p_ij
// kappa is the angle between the two local reference frames
const kappa = 0.15; // approximate angle between reference edges
const thetaR = theta + kappa + cf_pij * Math.PI / 2;
const cxR = (triR[0][0] + triR[1][0] + triR[2][0]) / 3;
const cyR = (triR[0][1] + triR[1][1] + triR[2][1]) / 3;
for (let k = 0; k < 4; k++) {
const a = thetaR + k * Math.PI / 2;
const dx = armR * Math.cos(a), dy = -armR * Math.sin(a);
svg.append("line")
.attr("x1", cxR - dx).attr("y1", cyR - dy).attr("x2", cxR + dx).attr("y2", cyR + dy)
.attr("stroke", k % 2 === 0 ? "#e67e22" : "rgba(230,126,34,0.5)")
.attr("stroke-width", 2).attr("stroke-linecap", "round");
}
// Angle arc in left triangle
if (theta > 0.02) {
const arcPts = d3.range(0, theta + 0.01, 0.02).map(a => [cxL + 20*Math.cos(a), cyL - 20*Math.sin(a)]);
svg.append("path").attr("d", d3.line()(arcPts))
.attr("fill", "none").attr("stroke", "steelblue").attr("stroke-width", 1.5);
svg.append("text").attr("x", cxL + 30*Math.cos(theta/2)).attr("y", cyL - 30*Math.sin(theta/2))
.attr("font-size", 12).attr("fill", "steelblue").text("θ");
}
// Labels
svg.append("text").attr("x", cxL).attr("y", 100).attr("text-anchor", "middle")
.attr("font-size", 13).attr("fill", "steelblue").text("Triangle i");
svg.append("text").attr("x", cxR).attr("y", 100).attr("text-anchor", "middle")
.attr("font-size", 13).attr("fill", "#e67e22").text("Triangle j");
// Period jump label on shared edge
svg.append("text").attr("x", 310).attr("y", 220).attr("font-size", 13)
.attr("fill", "#e74c3c").attr("font-weight", "bold")
.text(`p_ij = ${cf_pij}`);
svg.append("text").attr("x", w/2).attr("y", h - 5).attr("text-anchor", "middle")
.attr("font-size", 12).attr("fill", "#666")
.text(`Period jump p_ij = ${cf_pij}: cross in j is rotated by ${cf_pij} × 90° relative to i`);
return svg.node();
}
```
### 3.2 Smoothness Energy
The smoothness of the cross field is measured as the sum of squared angle differences between neighboring triangles, accounting for the period jumps:
$$E_{\text{smooth}} = \sum_{e_{ij} \in E} \left( \theta_i + \kappa_{ij} + \frac{\pi}{2} p_{ij} - \theta_j \right)^2 \tag{1}$$
where:
- $\theta_i, \theta_j$ are the cross field angles in triangles $i$ and $j$
- $\kappa_{ij} \in (-\pi, \pi]$ is the rotation angle between the local coordinate frames of $i$ and $j$ (computed by flattening both triangles along their common edge)
- $p_{ij} \in \mathbb{Z}$ is the integer-valued period jump across edge $e_{ij}$
The term $\theta_i + \kappa_{ij} + \frac{\pi}{2} p_{ij}$ represents the angle of triangle $i$'s cross **expressed in triangle $j$'s frame**. The squared difference with $\theta_j$ measures how much the two crosses disagree.
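Equation (1) is straightforward to evaluate. The sketch below uses a toy edge list of my own choosing: one edge where the period jump exactly compensates a $\pi/2$ rotation, and one with a small residual mismatch:

```javascript
// Evaluating E_smooth (Eq. 1) over a list of dual edges. Each edge stores
// the angles of its two faces, the frame rotation κ_ij, and the integer
// period jump p_ij.
function smoothnessEnergy(edges) {
  return edges.reduce((E, e) => {
    const diff = e.thetaI + e.kappa + (Math.PI / 2) * e.p - e.thetaJ;
    return E + diff * diff;
  }, 0);
}

const edges = [
  // p = 1 exactly absorbs the extra π/2 rotation: zero contribution.
  { thetaI: 0.3, thetaJ: 0.3 + 0.15 + Math.PI / 2, kappa: 0.15, p: 1 },
  // Small angular mismatch: contributes (0.3 + 0.15 - 0.5)² = 0.05².
  { thetaI: 0.3, thetaJ: 0.5, kappa: 0.15, p: 0 },
];
console.log(smoothnessEnergy(edges)); // ≈ 0.0025
```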
### 3.3 The Cross Field Index
The cross field index of a vertex $v_i$ measures whether a singularity exists there:
$$I(v_i) = I_0(v_i) + \sum_{e_{ij} \in N(v_i)} \frac{p_{ij}}{4}$$
where $I_0(v_i) = \frac{1}{2\pi} \left( A_d(v_i) + \sum_{e_{ij} \in N(v_i)} \kappa_{ij} \right)$ and $A_d(v_i)$ is the angle defect. Regular vertices have index $0$; singularities have a nonzero index, always a multiple of $\frac{1}{4}$, e.g. $+\frac{1}{4}$ for a valence-3 vertex and $-\frac{1}{4}$ for a valence-5 vertex of the quadrangulation.
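The index formula translates directly into code (a sketch; the vertex data is hand-picked to show a regular vertex and a $+\frac{1}{4}$ singularity):

```javascript
// Cross field index of a vertex: I_0 from the angle defect and the frame
// rotations κ around the vertex, shifted by the period jumps in steps of 1/4.
function index(angleDefect, kappas, periodJumps) {
  const I0 = (angleDefect + kappas.reduce((s, k) => s + k, 0)) / (2 * Math.PI);
  return I0 + periodJumps.reduce((s, p) => s + p, 0) / 4;
}

// Flat vertex, no net frame rotation: the period jumps alone decide.
console.log(index(0, [0, 0, 0, 0], [1, 0, 0, 0])); // 0.25 → valence-3 singularity
console.log(index(0, [0, 0, 0, 0], [0, 0, 0, 0])); // 0 → regular vertex
```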
### 3.4 The Mixed-Integer Formulation
Setting the gradient of $E_{\text{smooth}}$ to zero gives a system of linear equations in the unknowns $\theta_i$ (continuous) and $p_{ij}$ (integer):
$$\frac{\partial E_{\text{smooth}}}{\partial \theta_k} = \sum_{e_{kj} \in N(f_k)} 2\left(\theta_k + \kappa_{kj} + \frac{\pi}{2} p_{kj} - \theta_j\right) \stackrel{!}{=} 0 \tag{2}$$
$$\frac{\partial E_{\text{smooth}}}{\partial p_{ij}} = \pi(\theta_i + \kappa_{ij} + \frac{\pi}{2} p_{ij} - \theta_j) \stackrel{!}{=} 0 \tag{3}$$
**Reducing the search space**: The solution has a family of equivalent minimizers (rotating the cross of any unconstrained face by $\pi/2$ and adjusting its adjacent period jumps). This ambiguity is eliminated by constructing a **forest of dual spanning trees** rooted at the constrained faces and fixing the period jumps on tree edges to zero. The free integer variables are then the period jumps on the non-tree edges — roughly $|E| - |F| \approx |V|$ of them, about one per vertex.
```{ojs}
//| label: fig-dual-spanning-tree
viewof show_tree = Inputs.toggle({label: "Show dual spanning tree", value: true})
```
```{ojs}
//| code-fold: true
{
const w = 500, h = 420;
const svg = d3.create("svg").attr("viewBox", `0 0 ${w} ${h}`).style("width", "100%").style("max-width", w + "px");
// Small mesh: 4x3 grid of quads → 24 triangles
const nx = 4, ny = 3;
const verts = [];
for (let j = 0; j <= ny; j++) {
for (let i = 0; i <= nx; i++) {
verts.push([60 + i * 95, 50 + j * 100]);
}
}
const tris = [];
const triCenters = [];
for (let j = 0; j < ny; j++) {
for (let i = 0; i < nx; i++) {
const v0 = j*(nx+1)+i, v1 = v0+1, v2 = v0+(nx+1), v3 = v2+1;
tris.push([v0, v1, v3]);
tris.push([v0, v3, v2]);
// Centers
triCenters.push([(verts[v0][0]+verts[v1][0]+verts[v3][0])/3,
(verts[v0][1]+verts[v1][1]+verts[v3][1])/3]);
triCenters.push([(verts[v0][0]+verts[v3][0]+verts[v2][0])/3,
(verts[v0][1]+verts[v3][1]+verts[v2][1])/3]);
}
}
// Constrained faces (red) — sparse constraints
const constrained = [0, 10, 20];
// Draw triangles
for (let t = 0; t < tris.length; t++) {
const tri = tris[t];
const pts = tri.map(i => verts[i]);
const isCon = constrained.includes(t);
svg.append("polygon")
.attr("points", pts.map(p => p.join(",")).join(" "))
.attr("fill", isCon ? "rgba(231,76,60,0.12)" : "rgba(200,200,200,0.05)")
.attr("stroke", "#ccc").attr("stroke-width", 1);
}
// Draw constrained face markers
for (const c of constrained) {
svg.append("circle").attr("cx", triCenters[c][0]).attr("cy", triCenters[c][1])
.attr("r", 5).attr("fill", "#e74c3c");
}
// Dual spanning tree edges
if (show_tree) {
// Simple tree: connect adjacent triangles through the dual graph
// Each pair of triangles sharing an edge in the primal mesh is connected in the dual
const dualEdges = [];
for (let j = 0; j < ny; j++) {
for (let i = 0; i < nx; i++) {
const t0 = 2*(j*nx + i), t1 = t0 + 1;
dualEdges.push([t0, t1]); // diagonal pair
if (i + 1 < nx) dualEdges.push([t0, 2*(j*nx + i + 1) + 1]); // right neighbor
if (j + 1 < ny) dualEdges.push([t1, 2*((j+1)*nx + i)]); // bottom neighbor
}
}
// Build spanning tree using BFS from constrained faces
const visited = new Set();
const treeEdges = [];
const queue = [...constrained];
for (const c of constrained) visited.add(c);
while (queue.length > 0) {
const cur = queue.shift();
for (const [a, b] of dualEdges) {
const neighbor = a === cur ? b : b === cur ? a : -1;
if (neighbor >= 0 && !visited.has(neighbor)) {
visited.add(neighbor);
treeEdges.push([cur, neighbor]);
queue.push(neighbor);
}
}
}
// Draw tree edges (green)
for (const [a, b] of treeEdges) {
svg.append("line")
.attr("x1", triCenters[a][0]).attr("y1", triCenters[a][1])
.attr("x2", triCenters[b][0]).attr("y2", triCenters[b][1])
.attr("stroke", "#27ae60").attr("stroke-width", 2.5).attr("opacity", 0.7);
}
// Non-tree dual edges (these keep their p_ij as free integer variables)
const treeSet = new Set(treeEdges.map(e => `${Math.min(e[0],e[1])}-${Math.max(e[0],e[1])}`));
for (const [a, b] of dualEdges) {
const key = `${Math.min(a,b)}-${Math.max(a,b)}`;
if (!treeSet.has(key)) {
svg.append("line")
.attr("x1", triCenters[a][0]).attr("y1", triCenters[a][1])
.attr("x2", triCenters[b][0]).attr("y2", triCenters[b][1])
.attr("stroke", "#999").attr("stroke-width", 1).attr("stroke-dasharray", "3,3").attr("opacity", 0.4);
}
}
svg.append("text").attr("x", w/2).attr("y", h - 5).attr("text-anchor", "middle")
.attr("font-size", 12).attr("fill", "#666")
.text("Green: tree edges (p_ij = 0). Dashed: free integer variables.");
} else {
svg.append("text").attr("x", w/2).attr("y", h - 5).attr("text-anchor", "middle")
.attr("font-size", 12).attr("fill", "#666")
.text("Red faces: constrained. Toggle to see the dual spanning tree.");
}
// Legend
svg.append("circle").attr("cx", 20).attr("cy", 15).attr("r", 5).attr("fill", "#e74c3c");
svg.append("text").attr("x", 30).attr("y", 19).attr("font-size", 11).attr("fill", "#666")
.text("Constrained face (root)");
if (show_tree) {
svg.append("line").attr("x1", 200).attr("y1", 15).attr("x2", 220).attr("y2", 15)
.attr("stroke", "#27ae60").attr("stroke-width", 2.5);
svg.append("text").attr("x", 225).attr("y", 19).attr("font-size", 11).attr("fill", "#666")
.text("Tree edge (p_ij fixed to 0)");
}
return svg.node();
}
```
---
## 4. Global Parameterization
> *"We now compute a global parametrization, i.e. a map from the given mesh $\mathcal{M}$ to some disk-shaped parameter domain $\Omega \subset \mathbb{R}^2$."*
This is the heart of the MIQ method. Given the optimized cross field from Section 3, we compute two piecewise-linear scalar fields $u$ and $v$ on the mesh whose **gradients follow the cross field directions**. The integer iso-lines of $(u, v)$ then form the edges of the output quad mesh.
### 4.1 The Orientation Energy
The parameterization should be locally oriented according to the cross field. For each triangle $T$ with cross field directions $\mathbf{u}_T$ and $\mathbf{v}_T = \mathbf{u}_T^{\perp}$:
$$E_T = \|h\nabla u - \mathbf{u}_T\|^2 + \|h\nabla v - \mathbf{v}_T\|^2$$
where $h$ is a global scaling parameter controlling the target edge length. The global orientation energy integrates over the entire mesh:
$$E_{\text{orient}} = \int_\mathcal{M} E_T \, dA = \sum_{T \in \mathcal{M}} E_T \cdot \text{area}(T) \tag{4}$$
The minimizer is obtained by solving the sparse linear system that results from setting $\partial E_{\text{orient}} / \partial u_i = \partial E_{\text{orient}} / \partial v_i = 0$ for all vertex parameters.
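Per triangle, the energy is a plain least-squares comparison between the scaled parameterization gradients and the cross directions. A minimal sketch (helper names and sample vectors are my own):

```javascript
// Per-triangle orientation energy E_T: compare h·∇u and h·∇v against
// the cross field directions u_T and v_T.
const sub = (a, b) => [a[0] - b[0], a[1] - b[1]];
const normSq = a => a[0] * a[0] + a[1] * a[1];
const scale = (s, a) => [s * a[0], s * a[1]];

function orientationEnergy(gradU, gradV, uT, vT, h) {
  return normSq(sub(scale(h, gradU), uT)) + normSq(sub(scale(h, gradV), vT));
}

// Perfect alignment (h∇u = u_T, h∇v = v_T) costs nothing:
console.log(orientationEnergy([1, 0], [0, 1], [1, 0], [0, 1], 1)); // 0
// A cross rotated 90° relative to the gradients is maximally penalized:
console.log(orientationEnergy([1, 0], [0, 1], [0, 1], [-1, 0], 1)); // 4
```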
```{ojs}
//| label: fig-param-energy
viewof param_h = Inputs.range([0.3, 2.0], {value: 1.0, step: 0.1, label: "Edge length h"})
```
```{ojs}
//| code-fold: true
{
const w = 520, h = 420;
const svg = d3.create("svg").attr("viewBox", `0 0 ${w} ${h}`).style("width", "100%").style("max-width", w + "px");
const ox = 60, oy = 360, sc = 280;
// Draw a patch with iso-lines following a cross field
const nLines = Math.round(8 / param_h);
// Background
svg.append("rect").attr("x", ox).attr("y", oy - sc).attr("width", sc).attr("height", sc)
.attr("fill", "rgba(200,200,200,0.05)").attr("stroke", "#ccc").attr("stroke-width", 1);
// u iso-lines (vertical-ish, following cross field)
for (let i = 0; i <= nLines; i++) {
const t = i / nLines;
// Slight curvature to simulate cross field alignment
const pts = d3.range(0, 1.01, 0.02).map(s => {
const x = ox + t * sc + Math.sin(s * Math.PI) * 10 * (t - 0.5);
const y = oy - s * sc;
return [x, y];
});
svg.append("path").attr("d", d3.line()(pts))
.attr("fill", "none").attr("stroke", "steelblue").attr("stroke-width", 1.2).attr("opacity", 0.7);
}
// v iso-lines (horizontal-ish)
for (let i = 0; i <= nLines; i++) {
const t = i / nLines;
const pts = d3.range(0, 1.01, 0.02).map(s => {
const x = ox + s * sc;
const y = oy - t * sc + Math.sin(s * Math.PI) * 10 * (t - 0.5);
return [x, y];
});
svg.append("path").attr("d", d3.line()(pts))
.attr("fill", "none").attr("stroke", "#e67e22").attr("stroke-width", 1.2).attr("opacity", 0.7);
}
// Cross field direction arrows at a few sample points
const nSample = 4;
for (let j = 1; j < nSample; j++) {
for (let i = 1; i < nSample; i++) {
const px = ox + i / nSample * sc;
const py = oy - j / nSample * sc;
const angle = Math.sin(i/nSample * Math.PI) * 0.15; // slight rotation
const armLen = 12;
// u direction
svg.append("line")
.attr("x1", px - armLen*Math.cos(angle)).attr("y1", py + armLen*Math.sin(angle))
.attr("x2", px + armLen*Math.cos(angle)).attr("y2", py - armLen*Math.sin(angle))
.attr("stroke", "steelblue").attr("stroke-width", 2.5).attr("stroke-linecap", "round");
// v direction (perpendicular)
svg.append("line")
.attr("x1", px - armLen*Math.sin(angle)).attr("y1", py - armLen*Math.cos(angle))
.attr("x2", px + armLen*Math.sin(angle)).attr("y2", py + armLen*Math.cos(angle))
.attr("stroke", "#e67e22").attr("stroke-width", 2.5).attr("stroke-linecap", "round");
}
}
// Labels
svg.append("text").attr("x", ox + sc + 15).attr("y", oy - sc/2)
.attr("font-size", 13).attr("fill", "steelblue").text("u iso-lines");
svg.append("text").attr("x", ox + sc/2).attr("y", oy + 25).attr("text-anchor", "middle")
.attr("font-size", 13).attr("fill", "#e67e22").text("v iso-lines");
svg.append("text").attr("x", w/2).attr("y", h - 5).attr("text-anchor", "middle")
.attr("font-size", 13).attr("fill", "#666")
.text(`h = ${param_h.toFixed(1)} → ~${nLines} iso-lines per direction`);
return svg.node();
}
```
### 4.2 Cutting the Mesh
To compute a proper parameterization, the mesh $\mathcal{M}$ must be **cut open** to become topologically a disk. Cutting also ensures all singular vertices lie on the boundary of the parameter domain, since the angle defect of a cone singularity cannot be represented at an interior vertex of a flat parameter domain.
The cut graph is computed in two steps:
1. **Dual spanning tree** — start from a random triangle and grow a tree in the dual mesh. The primal of all non-tree edges forms a cut graph that already gives disk topology.
2. **Singularity paths** — add shortest paths (via Dijkstra) from each singularity to the existing cut graph.
The cut graph is then simplified by removing open paths (leaves), yielding a minimal set of cuts.
```{ojs}
//| label: fig-mesh-cutting
viewof cut_step = Inputs.radio(["Original mesh", "Cut to disk", "Unfolded parameter domain"], {value: "Original mesh", label: "Step"})
```
```{ojs}
//| code-fold: true
{
const w = 500, h = 420;
const svg = d3.create("svg").attr("viewBox", `0 0 ${w} ${h}`).style("width", "100%").style("max-width", w + "px");
const cx = w/2, cy = 200;
if (cut_step === "Original mesh") {
// Draw a closed surface (simplified torus-like shape as a polygon mesh)
// Represent as a rectangular mesh with identified edges
const nx = 6, ny = 4;
const baseX = 80, baseY = 80, cellW = 55, cellH = 60;
for (let j = 0; j < ny; j++) {
for (let i = 0; i < nx; i++) {
const x0 = baseX + i*cellW, y0 = baseY + j*cellH;
svg.append("rect").attr("x", x0).attr("y", y0)
.attr("width", cellW).attr("height", cellH)
.attr("fill", "rgba(200,200,200,0.05)").attr("stroke", "#ccc").attr("stroke-width", 1);
}
}
// Singularities
const sings = [[baseX + 2*cellW, baseY + 1*cellH], [baseX + 4*cellW, baseY + 3*cellH]];
for (const s of sings) {
svg.append("circle").attr("cx", s[0]).attr("cy", s[1]).attr("r", 6).attr("fill", "#e74c3c");
}
svg.append("text").attr("x", w/2).attr("y", h - 20).attr("text-anchor", "middle")
.attr("font-size", 13).attr("fill", "#666")
.text("Closed surface mesh with 2 singularities (red)");
} else if (cut_step === "Cut to disk") {
const nx = 6, ny = 4;
const baseX = 80, baseY = 80, cellW = 55, cellH = 60;
for (let j = 0; j < ny; j++) {
for (let i = 0; i < nx; i++) {
const x0 = baseX + i*cellW, y0 = baseY + j*cellH;
svg.append("rect").attr("x", x0).attr("y", y0)
.attr("width", cellW).attr("height", cellH)
.attr("fill", "rgba(200,200,200,0.05)").attr("stroke", "#ccc").attr("stroke-width", 1);
}
}
// Cut graph (green lines)
const cuts = [
[[baseX, baseY], [baseX + 2*cellW, baseY]],
[[baseX + 2*cellW, baseY], [baseX + 2*cellW, baseY + cellH]], // to singularity 1
[[baseX + 2*cellW, baseY + cellH], [baseX + 4*cellW, baseY + cellH]],
[[baseX + 4*cellW, baseY + cellH], [baseX + 4*cellW, baseY + 3*cellH]], // to singularity 2
];
for (const [a, b] of cuts) {
svg.append("line").attr("x1", a[0]).attr("y1", a[1]).attr("x2", b[0]).attr("y2", b[1])
.attr("stroke", "#27ae60").attr("stroke-width", 3.5);
}
const sings = [[baseX + 2*cellW, baseY + 1*cellH], [baseX + 4*cellW, baseY + 3*cellH]];
for (const s of sings) {
svg.append("circle").attr("cx", s[0]).attr("cy", s[1]).attr("r", 6).attr("fill", "#e74c3c");
}
svg.append("text").attr("x", w/2).attr("y", h - 20).attr("text-anchor", "middle")
.attr("font-size", 13).attr("fill", "#27ae60")
.text("Cut graph (green): singularities connected to boundary");
} else {
// Unfolded disk
const diskCx = w/2, diskCy = 200, diskR = 150;
svg.append("ellipse").attr("cx", diskCx).attr("cy", diskCy)
.attr("rx", diskR * 1.3).attr("ry", diskR)
.attr("fill", "rgba(70,130,180,0.06)").attr("stroke", "#2c3e50").attr("stroke-width", 2);
// Integer grid in parameter domain
for (let i = -3; i <= 3; i++) {
const x = diskCx + i * 40;
svg.append("line").attr("x1", x).attr("y1", diskCy - diskR + 20).attr("x2", x).attr("y2", diskCy + diskR - 20)
.attr("stroke", "steelblue").attr("stroke-width", 0.8).attr("opacity", 0.5);
}
for (let i = -3; i <= 3; i++) {
const y = diskCy + i * 40;
svg.append("line").attr("x1", diskCx - diskR * 1.1).attr("y1", y).attr("x2", diskCx + diskR * 1.1).attr("y2", y)
.attr("stroke", "#e67e22").attr("stroke-width", 0.8).attr("opacity", 0.5);
}
// Singularities on integer positions
const singPos = [[diskCx - 40, diskCy], [diskCx + 80, diskCy + 40]];
for (const s of singPos) {
// Snap to integer grid
const sx = Math.round((s[0] - diskCx) / 40) * 40 + diskCx;
const sy = Math.round((s[1] - diskCy) / 40) * 40 + diskCy;
svg.append("circle").attr("cx", sx).attr("cy", sy).attr("r", 6).attr("fill", "#e74c3c");
}
// Cut seams on boundary (shown as doubled edges)
svg.append("line")
.attr("x1", diskCx - diskR * 0.8).attr("y1", diskCy - diskR * 0.6)
.attr("x2", diskCx - diskR * 0.2).attr("y2", diskCy - diskR * 0.9)
.attr("stroke", "#27ae60").attr("stroke-width", 3).attr("stroke-dasharray", "5,3");
svg.append("text").attr("x", w/2).attr("y", h - 20).attr("text-anchor", "middle")
.attr("font-size", 13).attr("fill", "#666")
.text("Unfolded parameter domain with integer grid (singularities on Z^2)");
}
return svg.node();
}
```
### 4.3 Integer Constraints
After cutting the mesh, two types of integer constraints arise:
**Integer singularity locations**: Each singularity must map to an integer $(u, v)$ position. This ensures the cone singularity appears as a quad mesh vertex with the correct valence.
**Cross-boundary compatibility**: At each cut edge $e = \overline{pq}$, the $(u, v)$ values on both sides must be related by a grid automorphism. Introducing two integer variables $j_e, k_e$ per cut edge:
$$(u_p', v_p') = \text{Rot}_{90}^{u_e}(u_p, v_p) + (j_e, k_e)$$
$$(u_q', v_q') = \text{Rot}_{90}^{u_e}(u_q, v_q) + (j_e, k_e)$$
where $u_e \in \{0, 1, 2, 3\}$ counts the $90°$ rotations, determined by matching the cross field across the cut. In total, each cut edge trades **four continuous variables for two integer variables** $(j_e, k_e)$.
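The grid automorphism is concrete enough to write down directly. This sketch is my own (the function name and sample values are invented), but the structure — a rotation by a multiple of $90°$ followed by an integer translation — is the one in the equations above:

```javascript
// Transition function across a cut edge: rotate (u, v) by r·90°,
// then translate by the integer offset (j, k).
function transition(uv, r, j, k) {
  const rot = [
    ([u, v]) => [u, v],    // 0°
    ([u, v]) => [-v, u],   // 90°
    ([u, v]) => [-u, -v],  // 180°
    ([u, v]) => [v, -u],   // 270°
  ][((r % 4) + 4) % 4];
  const [u, v] = rot(uv);
  return [u + j, v + k];
}

// The same (r, j, k) must map both endpoints of the cut edge, so the
// integer iso-lines stay continuous across the seam.
console.log(transition([0.25, 1.0], 1, 2, -1)); // [1, -0.75]
```

Because the rotation is a multiple of $90°$ and the translation is integral, the map sends the integer grid $\mathbb{Z}^2$ onto itself — which is precisely why iso-lines match up across the cut.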
::: {.callout-note}
## The key connection to Bommes 2013
These transition functions $(j_e, k_e)$ with rotation $\text{Rot}_{90}^{u_e}$ are exactly the **integer-grid automorphisms** from Condition 1 of the IGM definition. The MIQ method uses greedy rounding to find them; the 2013 paper uses branch-and-cut MIQP to find them with a guarantee of validity.
:::
### 4.4 The Greedy Parameterization Solver
The full parameterization is a mixed-integer quadratic problem:
- **Continuous variables**: $(u_i, v_i)$ at each vertex $\approx 2|V|$ unknowns
- **Integer variables**: $(j_e, k_e)$ at each cut edge $\approx 2 \times \text{cuts}$, plus integer location constraints for singularities
The greedy solver from Section 2 is applied: compute the continuous relaxation, then iteratively snap singularities to integer locations. After rounding a singularity, the continuous variables are recomputed to minimize $E_{\text{orient}}$ with the new integer constraints.
> *"Applying our mixed-integer greedy solver to this parametrization task can be understood in an intuitive way. After computing an all-continuous solution, which corresponds to the unconstrained parametrization, we iteratively snap the singularities to integer locations."*
---
## 5. Improving the Parameterization
### 5.1 Anisotropic Norm
In practice, exact orientation (following the cross field) is more important than exact edge length. The anisotropic norm trades off these priorities:
$$\|(u, v)\|_{(\alpha, \beta)}^2 = \alpha u^2 + \beta v^2$$
Using the weightings $(\gamma, 1)$ and $(1, \gamma)$ with $\gamma \leq 1$ allows the parameterization to **stretch** along the iso-line direction (weakly penalized) while maintaining strict alignment with the cross field (fully penalized):
$$E_T = \|h\nabla u - \mathbf{u}_T\|_{(\gamma, 1)}^2 + \|h\nabla v - \mathbf{v}_T\|_{(1, \gamma)}^2$$
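A tiny numerical illustration of the asymmetry (values chosen by me, not from the paper): the same deviation magnitude costs half as much along the $\gamma$-weighted component as along the unit-weighted one.

```javascript
// Anisotropic squared norm ‖(a, b)‖²_{(α,β)} = α·a² + β·b².
const anisoSq = (a, b, alpha, beta) => alpha * a * a + beta * b * b;

const gamma = 0.5;
// Deviation along the iso-line direction (γ-weighted): cheap.
console.log(anisoSq(0.2, 0.0, gamma, 1)); // ≈ 0.02
// Same deviation as misalignment with the cross field: full price.
console.log(anisoSq(0.0, 0.2, gamma, 1)); // ≈ 0.04
```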
### 5.2 Feature Line Alignment
Feature edges must map to integer iso-lines to appear as mesh edges in the output. For each feature edge $\overline{pq}$ aligned with the cross field $u$-direction, add the constraint:
$$v_p = v_q \in \mathbb{Z}$$
This ensures the feature edge lies on an integer $v$ iso-line. Each alignment condition adds one integer variable, handled by the greedy solver.
### 5.3 Singularity Relocation
After the initial parameterization, singularities may be poorly placed — too close to each other, to a boundary, or to a feature. A **local search** post-process moves each singularity to a neighboring vertex if it reduces $E_{\text{orient}}$. Moving a singularity along edge $e_{ij}$ changes the period jump $p_{ij}$, requiring only a right-hand-side update to the linear system (the matrix is unchanged, so the Cholesky factorization can be reused).
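The pattern is: factor once, then re-solve cheaply for each candidate right-hand side. A dense-Cholesky sketch (the real system is sparse and the paper uses a sparse factorization; this toy version only illustrates the reuse):

```javascript
// Dense Cholesky A = L L^T (assumes A symmetric positive definite).
function cholesky(A) {
  const n = A.length;
  const L = Array.from({length: n}, () => new Array(n).fill(0));
  for (let i = 0; i < n; i++)
    for (let j = 0; j <= i; j++) {
      let s = A[i][j];
      for (let k = 0; k < j; k++) s -= L[i][k] * L[j][k];
      L[i][j] = i === j ? Math.sqrt(s) : s / L[j][j];
    }
  return L;
}

// Solve A x = b reusing the factor: forward (L y = b), backward (L^T x = y).
function solveChol(L, b) {
  const n = L.length, y = new Array(n), x = new Array(n);
  for (let i = 0; i < n; i++) {
    let s = b[i];
    for (let k = 0; k < i; k++) s -= L[i][k] * y[k];
    y[i] = s / L[i][i];
  }
  for (let i = n - 1; i >= 0; i--) {
    let s = y[i];
    for (let k = i + 1; k < n; k++) s -= L[k][i] * x[k];
    x[i] = s / L[i][i];
  }
  return x;
}
// const L = cholesky(A);           // factor once
// const xNew = solveChol(L, rhs);  // reuse per relocation candidate
```

Each candidate relocation then costs only a forward and a backward substitution instead of a fresh factorization.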
### 5.4 Local Stiffening
The minimizer of $E_{\text{orient}}$ can still produce **flipped triangles** near singularities: the Jacobian determinant goes negative even at the optimal continuous solution. Local stiffening adds an adaptive per-triangle weight $w(T)$ that penalizes high distortion:
$$E_{\text{orient}} = \sum_{T \in \mathcal{M}} w(T) \cdot E_T \cdot \text{area}(T)$$
The distortion is characterized by the singular values $\sigma_1, \sigma_2$ of the Jacobian and the orientation $\tau = \text{sign}(\det J_T)$:
$$\lambda = \left|\tau \frac{\sigma_1}{h} - 1\right| + \left|\frac{\sigma_2}{h} - 1\right|$$
The weight is updated iteratively, driven by a smoothed Laplacian $\Delta\lambda(T)$ of the distortion:
$$w(T) \leftarrow w(T) + \min\{c \cdot |\Delta\lambda(T)|,\; d\}$$
with $c = 1$ and maximum per-step update $d = 5$. After smoothing $w(T)$, the parameterization is recomputed. This iterative stiffening handles most degeneracies, though it **cannot guarantee** that all triangles remain non-degenerate.
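One stiffening pass can be sketched as follows. Here `neighbors[t]` lists the triangles adjacent to triangle `t`, and a simple umbrella (graph) Laplacian stands in for the paper's smoothed Laplacian of $\lambda$; $\sigma_1, \sigma_2, \tau$ would come from the per-triangle Jacobian:

```javascript
// Per-triangle distortion from the Jacobian's singular values and orientation.
function distortion(sigma1, sigma2, tau, h) {
  return Math.abs(tau * sigma1 / h - 1) + Math.abs(sigma2 / h - 1);
}

// Umbrella (graph) Laplacian of the distortion over triangle adjacency.
function laplacian(lambda, neighbors) {
  return lambda.map((l, t) =>
    neighbors[t].reduce((s, u) => s + lambda[u], 0) / neighbors[t].length - l);
}

// w(T) <- w(T) + min{ c * |Delta lambda(T)|, d }, capped per step by d.
function stiffen(w, lambda, neighbors, c = 1, d = 5) {
  const dl = laplacian(lambda, neighbors);
  return w.map((wi, t) => wi + Math.min(c * Math.abs(dl[t]), d));
}
```

An undistorted triangle ($\sigma_1 = \sigma_2 = h$, $\tau = +1$) has $\lambda = 0$; a flipped one ($\tau = -1$) is penalized heavily, so its weight grows until the recomputed parameterization unflips it (usually, but not provably).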
Explore how the stiffening weight affects the parameterization near a singularity:
```{ojs}
//| label: fig-stiffening
viewof stiffening_iter = Inputs.range([0, 5], {value: 0, step: 1, label: "Stiffening iterations"})
```
```{ojs}
//| code-fold: true
{
const w = 500, h = 420;
const svg = d3.create("svg").attr("viewBox", `0 0 ${w} ${h}`).style("width", "100%").style("max-width", w + "px");
const cx = w/2, cy = 200;
const gridR = 160;
// Singularity at center
svg.append("circle").attr("cx", cx).attr("cy", cy).attr("r", 7).attr("fill", "#e74c3c");
svg.append("text").attr("x", cx + 12).attr("y", cy - 8)
.attr("font-size", 12).attr("fill", "#e74c3c").text("singularity");
// Draw deformed grid around singularity
// As stiffening increases, grid becomes more regular (less distortion)
const nLines = 8;
const distortion = Math.max(0, 1 - stiffening_iter * 0.18);
for (let i = -nLines/2; i <= nLines/2; i++) {
const t = i / (nLines/2);
// u iso-lines (vertical-ish)
const pts = d3.range(-1, 1.01, 0.05).map(s => {
const angle = Math.atan2(s, t);
const dist = Math.sqrt(t*t + s*s);
// Singularity distortion: lines curve near center
const curvature = distortion * 30 * Math.exp(-dist * 2) * Math.sin(angle * 2);
return [cx + t * gridR + curvature, cy + s * gridR];
});
svg.append("path").attr("d", d3.line()(pts))
.attr("fill", "none").attr("stroke", "steelblue").attr("stroke-width", 1).attr("opacity", 0.6);
// v iso-lines (horizontal-ish)
const pts2 = d3.range(-1, 1.01, 0.05).map(s => {
const angle = Math.atan2(t, s);
const dist = Math.sqrt(t*t + s*s);
const curvature = distortion * 30 * Math.exp(-dist * 2) * Math.sin(angle * 2);
return [cx + s * gridR, cy + t * gridR + curvature];
});
svg.append("path").attr("d", d3.line()(pts2))
.attr("fill", "none").attr("stroke", "#e67e22").attr("stroke-width", 1).attr("opacity", 0.6);
}
// Highlight potentially flipped region
if (stiffening_iter < 2) {
const flipR = 30 * (1 - stiffening_iter * 0.4);
svg.append("circle").attr("cx", cx - 20).attr("cy", cy + 15).attr("r", flipR)
.attr("fill", "rgba(231,76,60,0.15)").attr("stroke", "#e74c3c")
.attr("stroke-width", 1).attr("stroke-dasharray", "3,3");
svg.append("text").attr("x", cx - 20).attr("y", cy + 15 + flipR + 14)
.attr("text-anchor", "middle").attr("font-size", 10).attr("fill", "#e74c3c")
.text("flipped region");
}
const status = stiffening_iter === 0 ? "No stiffening — distortion and flips near singularity" :
stiffening_iter < 3 ? `Iteration ${stiffening_iter}: weight increased, distortion reducing` :
`Iteration ${stiffening_iter}: parameterization well-behaved`;
const statusColor = stiffening_iter < 2 ? "#e74c3c" : "#27ae60";
svg.append("text").attr("x", w/2).attr("y", h - 5).attr("text-anchor", "middle")
.attr("font-size", 13).attr("fill", statusColor)
.text(status);
return svg.node();
}
```
::: {.callout-warning}
## The reliability gap
Local stiffening is a heuristic — it increases weights iteratively until the distortion decreases, but it **cannot guarantee** that all triangles will have positive Jacobian determinant. For fine quad meshes this works well, but for **coarse quad layouts** (where the target edge length is comparable to singularity spacing), stiffening often fails. This is the core motivation for the [Bommes et al. 2013](integer_grid_maps.qmd) paper's convex MIQP formulation.
:::
---
## 6. The Full MIQ Pipeline
Putting all the pieces together, the MIQ pipeline proceeds in six stages; step through them with the buttons below:
<div id="miq-info" style="font-family: monospace; font-size: 13px; padding: 8px 12px; background: var(--bs-tertiary-bg, #f8f9fa); border-radius: 4px; margin-bottom: 8px;">
Step 1/6: Input mesh with sparse directional constraints
</div>
<div style="margin-bottom: 8px; display: flex; flex-wrap: wrap; gap: 4px;">
<button id="miq-step1" class="btn btn-sm btn-primary" onclick="window.miqStep('step1')">1. Constraints</button>
<button id="miq-step2" class="btn btn-sm btn-outline-secondary" onclick="window.miqStep('step2')" disabled>2. Cross Field</button>
<button id="miq-step3" class="btn btn-sm btn-outline-secondary" onclick="window.miqStep('step3')" disabled>3. Cut Mesh</button>
<button id="miq-step4" class="btn btn-sm btn-outline-secondary" onclick="window.miqStep('step4')" disabled>4. Parameterize</button>
<button id="miq-step5" class="btn btn-sm btn-outline-secondary" onclick="window.miqStep('step5')" disabled>5. Stiffen</button>
<button id="miq-step6" class="btn btn-sm btn-outline-secondary" onclick="window.miqStep('step6')" disabled>6. Extract Quads</button>
</div>
<canvas id="miq-canvas" style="width: 100%; max-width: 620px; border: 1px solid var(--bs-border-color, #dee2e6); border-radius: 4px;"></canvas>
```{=html}
<script>
(function() {
const canvas = document.getElementById('miq-canvas');
const ctx = canvas.getContext('2d');
const info = document.getElementById('miq-info');
function resizeCanvas() {
const rect = canvas.parentElement.getBoundingClientRect();
const cw = Math.min(rect.width, 620);
canvas.style.width = cw + 'px';
canvas.style.height = (cw * 0.65) + 'px';
canvas.width = cw * devicePixelRatio;
canvas.height = cw * 0.65 * devicePixelRatio;
ctx.setTransform(devicePixelRatio, 0, 0, devicePixelRatio, 0, 0);
}
let currentStep = 0;
const steps = ['step1','step2','step3','step4','step5','step6'];
const labels = [
'Input mesh with sparse directional constraints',
'Smooth cross field via greedy mixed-integer solve',
'Cut mesh to disk topology (singularities on boundary)',
'Global parameterization: minimize E_orient (greedy rounding)',
'Local stiffening: fix flipped triangles iteratively',
'Extract quad mesh from integer iso-lines'
];
const metrics = [
'Constraints: ~15 faces | Curvature-based heuristic',
'Variables: ~30k (θ_i) + ~15k (p_ij integer) | Greedy solver',
'Cuts: dual spanning tree + singularity paths',
'Variables: ~55k (u,v) + ~200 (j_e,k_e integer)',
'Iterations: 3-5 | Weight update w(T) += min{c·|Δλ|, d}',
'Contour integer iso-lines → quad faces + vertices'
];
// Simple mesh for illustration
const N = 7;
const verts = [];
for (let j = 0; j <= N; j++) {
for (let i = 0; i <= N; i++) {
verts.push([i / N, j / N]);
}
}
const singVerts = [24, 40]; // two singularities
function draw() {
const cw = canvas.width / devicePixelRatio;
const ch = canvas.height / devicePixelRatio;
const bg = getComputedStyle(document.documentElement).getPropertyValue('--bs-body-bg') || '#fff';
const fg = getComputedStyle(document.documentElement).getPropertyValue('--bs-body-color') || '#333';
ctx.fillStyle = bg.trim() || '#fff';
ctx.fillRect(0, 0, cw, ch);
const m = 45;
const dw = cw - 2*m, dh = ch - 2*m - 15;
function toS(u, v) { return [m + u*dw, ch - m - v*dh]; }
// Background grid
ctx.strokeStyle = '#eee';
ctx.lineWidth = 0.5;
for (let i = 0; i <= N; i++) {
const f = i/N;
const [x1,y1] = toS(f, 0), [x2,y2] = toS(f, 1);
ctx.beginPath(); ctx.moveTo(x1,y1); ctx.lineTo(x2,y2); ctx.stroke();
const [x3,y3] = toS(0, f), [x4,y4] = toS(1, f);
ctx.beginPath(); ctx.moveTo(x3,y3); ctx.lineTo(x4,y4); ctx.stroke();
}
// Step-dependent rendering
if (currentStep === 0) {
// Constraints: show a few directional arrows
const conFaces = [[1,1],[3,2],[5,4],[2,5],[6,1]];
ctx.strokeStyle = '#ccc';
ctx.lineWidth = 0.5;
// Triangle mesh
for (let j = 0; j < N; j++) {
for (let i = 0; i < N; i++) {
const [x0,y0] = toS(i/N, j/N);
const [x1,y1] = toS((i+1)/N, j/N);
const [x2,y2] = toS(i/N, (j+1)/N);
const [x3,y3] = toS((i+1)/N, (j+1)/N);
ctx.beginPath(); ctx.moveTo(x0,y0); ctx.lineTo(x3,y3); ctx.stroke();
}
}
// Constraints
for (const [ci,cj] of conFaces) {
const cx = (ci+0.33)/N, cy = (cj+0.33)/N;
const [sx,sy] = toS(cx, cy);
ctx.strokeStyle = '#e74c3c';
ctx.lineWidth = 2.5;
const angle = ((ci * 7 + cj * 13) % 10) / 10 * Math.PI * 0.3; // deterministic per face (Math.random() would make arrows jump on every redraw)
const al = 15;
ctx.beginPath();
ctx.moveTo(sx-al*Math.cos(angle), sy+al*Math.sin(angle));
ctx.lineTo(sx+al*Math.cos(angle), sy-al*Math.sin(angle));
ctx.stroke();
ctx.beginPath();
ctx.moveTo(sx-al*Math.sin(angle), sy-al*Math.cos(angle));
ctx.lineTo(sx+al*Math.sin(angle), sy+al*Math.cos(angle));
ctx.stroke();
}
} else if (currentStep === 1) {
// Cross field on every face
for (let j = 0; j < N; j++) {
for (let i = 0; i < N; i++) {
const cx = (i+0.5)/N, cy = (j+0.5)/N;
const [sx,sy] = toS(cx, cy);
const angle = Math.sin(cx*3)*0.2 + Math.cos(cy*2)*0.15;
const al = Math.min(dw, dh) / N * 0.3;
ctx.strokeStyle = 'steelblue';
ctx.lineWidth = 1.2;
for (let k = 0; k < 2; k++) {
const a = angle + k*Math.PI/2;
ctx.beginPath();
ctx.moveTo(sx-al*Math.cos(a), sy+al*Math.sin(a));
ctx.lineTo(sx+al*Math.cos(a), sy-al*Math.sin(a));
ctx.stroke();
}
}
}
// Singularities
for (const si of singVerts) {
const r = Math.floor(si/(N+1)), c = si%(N+1);
const [sx,sy] = toS(c/N, r/N);
ctx.beginPath(); ctx.arc(sx,sy,5,0,2*Math.PI);
ctx.fillStyle = '#e74c3c'; ctx.fill();
}
} else if (currentStep === 2) {
// Show cut graph
ctx.strokeStyle = '#ccc';
ctx.lineWidth = 0.5;
for (let j = 0; j < N; j++) {
for (let i = 0; i < N; i++) {
const [x0,y0] = toS(i/N, j/N);
const [x1,y1] = toS((i+1)/N, j/N);
const [x2,y2] = toS(i/N, (j+1)/N);
const [x3,y3] = toS((i+1)/N, (j+1)/N);
ctx.beginPath(); ctx.moveTo(x0,y0); ctx.lineTo(x3,y3); ctx.stroke();
}
}
// Cut graph as green lines
ctx.strokeStyle = '#27ae60';
ctx.lineWidth = 3;
// Vertical cut from top to singularity 1
const s1r = Math.floor(singVerts[0]/(N+1)), s1c = singVerts[0]%(N+1);
const [sx1,sy1] = toS(s1c/N, s1r/N);
const [tx1,ty1] = toS(s1c/N, 1);
ctx.beginPath(); ctx.moveTo(sx1,sy1); ctx.lineTo(tx1,ty1); ctx.stroke();
// Horizontal cut between singularities
const s2r = Math.floor(singVerts[1]/(N+1)), s2c = singVerts[1]%(N+1);
const [sx2,sy2] = toS(s2c/N, s2r/N);
ctx.beginPath(); ctx.moveTo(sx1,sy1); ctx.lineTo(sx2,sy2); ctx.stroke();
// Singularities
for (const si of singVerts) {
const r = Math.floor(si/(N+1)), c = si%(N+1);
const [sx,sy] = toS(c/N, r/N);
ctx.beginPath(); ctx.arc(sx,sy,5,0,2*Math.PI);
ctx.fillStyle = '#e74c3c'; ctx.fill();
}
} else if (currentStep >= 3 && currentStep <= 4) {
// Parameterization: iso-lines
const nIso = 8;
const distortion = currentStep === 3 ? 0.4 : 0.1;
ctx.lineWidth = 1;
for (let i = 0; i <= nIso; i++) {
const t = i/nIso;
// u iso-lines
ctx.strokeStyle = 'steelblue';
ctx.beginPath();
for (let s = 0; s <= 1; s += 0.02) {
const x = t + Math.sin(s*Math.PI*2)*distortion*0.05;
const y = s;
const [sx,sy] = toS(Math.max(0,Math.min(1,x)), y);
s === 0 ? ctx.moveTo(sx,sy) : ctx.lineTo(sx,sy);
}
ctx.stroke();
// v iso-lines
ctx.strokeStyle = '#e67e22';
ctx.beginPath();
for (let s = 0; s <= 1; s += 0.02) {
const x = s;
const y = t + Math.sin(s*Math.PI*2)*distortion*0.05;
const [sx,sy] = toS(x, Math.max(0,Math.min(1,y)));
s === 0 ? ctx.moveTo(sx,sy) : ctx.lineTo(sx,sy);
}
ctx.stroke();
}
// Singularities
for (const si of singVerts) {
const r = Math.floor(si/(N+1)), c = si%(N+1);
const [sx,sy] = toS(c/N, r/N);
ctx.beginPath(); ctx.arc(sx,sy,5,0,2*Math.PI);
ctx.fillStyle = '#e74c3c'; ctx.fill();
}
} else {
// Final quad mesh
const nQ = 8;
ctx.strokeStyle = 'steelblue';
ctx.lineWidth = 1.5;
for (let i = 0; i <= nQ; i++) {
const f = i/nQ;
const [x1,y1] = toS(f, 0), [x2,y2] = toS(f, 1);
ctx.beginPath(); ctx.moveTo(x1,y1); ctx.lineTo(x2,y2); ctx.stroke();
const [x3,y3] = toS(0, f), [x4,y4] = toS(1, f);
ctx.beginPath(); ctx.moveTo(x3,y3); ctx.lineTo(x4,y4); ctx.stroke();
}
// Quad vertices
for (let j = 0; j <= nQ; j++) {
for (let i = 0; i <= nQ; i++) {
const [sx,sy] = toS(i/nQ, j/nQ);
ctx.beginPath(); ctx.arc(sx,sy,2,0,2*Math.PI);
ctx.fillStyle = '#2c3e50'; ctx.fill();
}
}
// Singularities as irregular vertices
for (const si of singVerts) {
const r = Math.floor(si/(N+1)), c = si%(N+1);
const [sx,sy] = toS(c/N, r/N);
ctx.beginPath(); ctx.arc(sx,sy,5,0,2*Math.PI);
ctx.fillStyle = '#e74c3c'; ctx.fill();
}
}
// Metrics
ctx.fillStyle = fg.trim() || '#333';
ctx.font = '11px monospace';
ctx.textAlign = 'left';
ctx.fillText(metrics[currentStep], 8, 14);
}
const btns = steps.map(s => document.getElementById('miq-' + s));
window.miqStep = function(step) {
const idx = steps.indexOf(step);
if (idx < 0) return;
currentStep = idx;
info.textContent = `Step ${idx+1}/${steps.length}: ${labels[idx]}`;
for (let i = 0; i <= idx; i++) {
btns[i].classList.remove('btn-outline-secondary');
btns[i].classList.add('btn-primary');
btns[i].disabled = false;
}
if (idx+1 < steps.length) btns[idx+1].disabled = false;
draw();
};
resizeCanvas();
window.addEventListener('resize', () => { resizeCanvas(); draw(); });
draw();
})();
</script>
```
---
## 7. Results and Comparison
### 7.1 Solver Statistics
The greedy solver is remarkably efficient due to local Gauss-Seidel updates:
| Model | Cross field vars | Time | Param. vars | Time | # Singularities |
|:---|---:|---:|---:|---:|---:|
| FANDISK | 17820 | 2.2 s | 13776 | 2.4 s | 30 |
| ROCKERARM | 32843 | 7.6 s | 41552 | 25.5 s | 36 |
| FERTILITY | 29342 | 6.6 s | 27954 | 25.7 s | 48 |
| BOTIJO | 25821 | 5.4 s | 29994 | 44.0 s | 74 |
| BEETLE | 46425 | 13.3 s | 34705 | 10.3 s | 6 |
The cross field stage is faster despite having more integer variables because rounding a period jump has only **local** effect (two triangles and their neighbors). Rounding in the parameterization has **global** effect (snapping a singularity shifts the entire solution), so each step needs a full re-solve via the sparse Cholesky factorization.
### 7.2 Greedy vs. Direct Rounding
The greedy solver consistently outperforms direct rounding:
- **Fewer singularities** — direct rounding creates unnecessary singularities from rounding errors
- **Lower smoothness energy** — the iterative recomputation after each rounding propagates corrections
- **Better singularity placement** — singularities emerge at geometrically meaningful locations
### 7.3 Limitations
> *"One limitation of our method is that for coarse quadrangulations of highly complex models with many cross field singularities, the local singularity relocation in the parametrization step is dominating the overall computation time."*
The MIQ method has two fundamental limitations that motivated subsequent work:
1. **No orientation guarantee** — stiffening is heuristic; flipped triangles can persist, especially at coarse scales
2. **Greedy is suboptimal** — rounding one variable at a time can miss globally better solutions
---
## 8. Comparison: MIQ (2009) vs. Integer-Grid Maps (2013)
Both papers address the same problem — global parameterization for quad meshing — but take fundamentally different approaches to the integer constraints:
| Aspect | MIQ (Bommes et al. 2009) | IGM (Bommes et al. 2013) |
|:---|:---|:---|
| **Solver** | Greedy rounding (one variable at a time) | Branch-and-cut MIQP (CPLEX) |
| **Orientation guarantee** | None — heuristic stiffening | Convex trisector constraints (guaranteed) |
| **Complexity** | Full mesh ($O(\|V\|)$ variables) | Decimated mesh ($O(\|S\|)$ variables) |
| **Singularity handling** | Local relocation post-process | Geodesic separation constraints |
| **Fine meshes** | Works well (stiffening usually succeeds) | Overkill — greedy is sufficient |
| **Coarse layouts** | Often fails (degeneracies) | Reliable (guaranteed valid IGM) |
| **Runtime** | Seconds (fast greedy + local updates) | Seconds to minutes (decimation + MIQP) |
| **Quality metric** | $E_T = \|h\nabla u - \mathbf{u}_T\|^2 + \|h\nabla v - \mathbf{v}_T\|^2$ | $E_\alpha^k = \frac{1}{2}\sum A_t \|R_t^T\nabla f_t - H_t\|_\alpha^k$ |
| **Cross-boundary compatibility** | $\text{Rot}_{90}^{u_e}(u,v) + (j_e, k_e)$ | Same: $R_{90}^{r_{ij}}\mathbf{a} + \mathbf{t}_{ij}$ |
| **Integer constraints** | Singularities snapped to $\mathbb{Z}^2$ | Same, plus separator constraints |
::: {.callout-tip}
## The progression of ideas
MIQ (2009) established the **formulation** — cross field smoothness + parameterization orientation as mixed-integer quadratic programs, solved by greedy rounding. IGM (2013) kept the formulation but replaced the **solver** with a global optimizer (branch-and-cut) and added the mathematical machinery (convex orientation constraints, complexity reduction, singularity separation) to make it practical. The transition functions, energy metrics, and integer constraint structure are fundamentally the same; the difference is in **how** the integer solution is found and **what guarantees** it provides.
:::
---
## 9. Connection to Our Work
The MIQ parameterization is the direct foundation for our MetricFrameField pipeline's parameterization stage. The key correspondence:
| MIQ Concept | MetricFrameField Implementation |
|:---|:---|
| Cross field angles $\theta_i$ | Cross field from metric-customized connection |
| Period jumps $p_{ij}$ | Matching numbers across edges |
| Smoothness energy $E_{\text{smooth}}$ | Connection-based smoothness (angle defect) |
| Orientation energy $E_{\text{orient}}$ | Parameterization energy with target metric $H_t$ |
| Greedy mixed-integer solver | Greedy rounding of period jumps and translations |
| Cut graph construction | Dual spanning tree + singularity paths |
| Local stiffening | Not yet implemented (potential improvement) |
The main difference: MIQ uses a fixed Euclidean metric with isotropic/anisotropic sizing ($h$ and $\gamma$), while our approach computes a **custom Riemannian metric** via optimization (Jiang et al. 2015) that simultaneously encodes the desired sizing, orientation, and singularity structure. The parameterization stage then uses the MIQ-style greedy solver to find the integer constraints.
const treeEdges = [];
const queue = [...constrained];
for (const c of constrained) visited.add(c);
while (queue.length > 0) {
const cur = queue.shift();
for (const [a, b] of dualEdges) {
const neighbor = a === cur ? b : b === cur ? a : -1;
if (neighbor >= 0 && !visited.has(neighbor)) {
visited.add(neighbor);
treeEdges.push([cur, neighbor]);
queue.push(neighbor);
}
}
}
// Draw tree edges (green)
for (const [a, b] of treeEdges) {
svg.append("line")
.attr("x1", triCenters[a][0]).attr("y1", triCenters[a][1])
.attr("x2", triCenters[b][0]).attr("y2", triCenters[b][1])
.attr("stroke", "#27ae60").attr("stroke-width", 2.5).attr("opacity", 0.7);
}
// Non-tree dual edges (these keep their p_ij as free integer variables)
const treeSet = new Set(treeEdges.map(e => `${Math.min(e[0],e[1])}-${Math.max(e[0],e[1])}`));
for (const [a, b] of dualEdges) {
const key = `${Math.min(a,b)}-${Math.max(a,b)}`;
if (!treeSet.has(key)) {
svg.append("line")
.attr("x1", triCenters[a][0]).attr("y1", triCenters[a][1])
.attr("x2", triCenters[b][0]).attr("y2", triCenters[b][1])
.attr("stroke", "#999").attr("stroke-width", 1).attr("stroke-dasharray", "3,3").attr("opacity", 0.4);
}
}
svg.append("text").attr("x", w/2).attr("y", h - 5).attr("text-anchor", "middle")
.attr("font-size", 12).attr("fill", "#666")
.text("Green: tree edges (p_ij = 0). Dashed: free integer variables.");
} else {
svg.append("text").attr("x", w/2).attr("y", h - 5).attr("text-anchor", "middle")
.attr("font-size", 12).attr("fill", "#666")
.text("Red faces: constrained. Toggle to see the dual spanning tree.");
}
// Legend
svg.append("circle").attr("cx", 20).attr("cy", 15).attr("r", 5).attr("fill", "#e74c3c");
svg.append("text").attr("x", 30).attr("y", 19).attr("font-size", 11).attr("fill", "#666")
.text("Constrained face (root)");
if (show_tree) {
svg.append("line").attr("x1", 200).attr("y1", 15).attr("x2", 220).attr("y2", 15)
.attr("stroke", "#27ae60").attr("stroke-width", 2.5);
svg.append("text").attr("x", 225).attr("y", 19).attr("font-size", 11).attr("fill", "#666")
.text("Tree edge (p_ij fixed to 0)");
}
return svg.node();
}

> *"We now compute a global parametrization, i.e. a map from the given mesh \(\mathcal{M}\) to some disk-shaped parameter domain \(\Omega \subset \mathbb{R}^2\)."*
This is the heart of the MIQ method. Given the optimized cross field from Section 3, we compute two piecewise-linear scalar fields \(u\) and \(v\) on the mesh whose gradients follow the cross field directions. The integer iso-lines of \((u, v)\) then form the edges of the output quad mesh.
The parameterization should be locally oriented according to the cross field. For each triangle \(T\) with cross field directions \(\mathbf{u}_T\) and \(\mathbf{v}_T = \mathbf{u}_T^{\perp}\):
\[E_T = \|h\nabla u - \mathbf{u}_T\|^2 + \|h\nabla v - \mathbf{v}_T\|^2\]
where \(h\) is a global scaling parameter controlling the target edge length. The global orientation energy integrates over the entire mesh:
\[E_{\text{orient}} = \int_\mathcal{M} E_T \, dA = \sum_{T \in \mathcal{M}} E_T \cdot \text{area}(T) \tag{4}\]
The minimizer is obtained by solving the sparse linear system \(\partial E_{\text{orient}} / \partial x_i = 0\).
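Evaluating \(E_T\) on one triangle only needs the gradient of a piecewise-linear function. A self-contained sketch (2D triangle, hypothetical helper names):

```javascript
// Gradient of a piecewise-linear scalar with values f = [f0, f1, f2]
// on the 2D triangle p = [p0, p1, p2]: solve e1.g = f1-f0, e2.g = f2-f0.
function gradient2D(p, f) {
  const [p0, p1, p2] = p;
  const e1 = [p1[0] - p0[0], p1[1] - p0[1]];
  const e2 = [p2[0] - p0[0], p2[1] - p0[1]];
  const det = e1[0] * e2[1] - e1[1] * e2[0]; // twice the signed area
  const df1 = f[1] - f[0], df2 = f[2] - f[0];
  return [(df1 * e2[1] - df2 * e1[1]) / det,
          (df2 * e1[0] - df1 * e2[0]) / det];
}

// E_T = |h*grad(u) - uT|^2 + |h*grad(v) - vT|^2 for one triangle.
function orientationEnergy(p, u, v, uT, vT, h) {
  const gu = gradient2D(p, u), gv = gradient2D(p, v);
  const du = [h * gu[0] - uT[0], h * gu[1] - uT[1]];
  const dv = [h * gv[0] - vT[0], h * gv[1] - vT[1]];
  return du[0] ** 2 + du[1] ** 2 + dv[0] ** 2 + dv[1] ** 2;
}

// Unit right triangle with u = x, v = y, h = 1, cross aligned with the
// axes: gradients match the cross exactly, so the energy vanishes.
const tri = [[0, 0], [1, 0], [0, 1]];
console.log(orientationEnergy(tri, [0, 1, 0], [0, 0, 1], [1, 0], [0, 1], 1)); // 0
```

Summing `orientationEnergy` weighted by triangle area over the mesh gives \(E_{\text{orient}}\) from Eq. (4).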
{
const w = 520, h = 420;
const svg = d3.create("svg").attr("viewBox", `0 0 ${w} ${h}`).style("width", "100%").style("max-width", w + "px");
const ox = 60, oy = 360, sc = 280;
// Draw a patch with iso-lines following a cross field
const nLines = Math.round(8 / param_h);
// Background
svg.append("rect").attr("x", ox).attr("y", oy - sc).attr("width", sc).attr("height", sc)
.attr("fill", "rgba(200,200,200,0.05)").attr("stroke", "#ccc").attr("stroke-width", 1);
// u iso-lines (vertical-ish, following cross field)
for (let i = 0; i <= nLines; i++) {
const t = i / nLines;
// Slight curvature to simulate cross field alignment
const pts = d3.range(0, 1.01, 0.02).map(s => {
const x = ox + t * sc + Math.sin(s * Math.PI) * 10 * (t - 0.5);
const y = oy - s * sc;
return [x, y];
});
svg.append("path").attr("d", d3.line()(pts))
.attr("fill", "none").attr("stroke", "steelblue").attr("stroke-width", 1.2).attr("opacity", 0.7);
}
// v iso-lines (horizontal-ish)
for (let i = 0; i <= nLines; i++) {
const t = i / nLines;
const pts = d3.range(0, 1.01, 0.02).map(s => {
const x = ox + s * sc;
const y = oy - t * sc + Math.sin(s * Math.PI) * 10 * (t - 0.5);
return [x, y];
});
svg.append("path").attr("d", d3.line()(pts))
.attr("fill", "none").attr("stroke", "#e67e22").attr("stroke-width", 1.2).attr("opacity", 0.7);
}
// Cross field direction arrows at a few sample points
const nSample = 4;
for (let j = 1; j < nSample; j++) {
for (let i = 1; i < nSample; i++) {
const px = ox + i / nSample * sc;
const py = oy - j / nSample * sc;
const angle = Math.sin(i/nSample * Math.PI) * 0.15; // slight rotation
const armLen = 12;
// u direction
svg.append("line")
.attr("x1", px - armLen*Math.cos(angle)).attr("y1", py + armLen*Math.sin(angle))
.attr("x2", px + armLen*Math.cos(angle)).attr("y2", py - armLen*Math.sin(angle))
.attr("stroke", "steelblue").attr("stroke-width", 2.5).attr("stroke-linecap", "round");
// v direction (perpendicular)
svg.append("line")
.attr("x1", px - armLen*Math.sin(angle)).attr("y1", py - armLen*Math.cos(angle))
.attr("x2", px + armLen*Math.sin(angle)).attr("y2", py + armLen*Math.cos(angle))
.attr("stroke", "#e67e22").attr("stroke-width", 2.5).attr("stroke-linecap", "round");
}
}
// Labels
svg.append("text").attr("x", ox + sc + 15).attr("y", oy - sc/2)
.attr("font-size", 13).attr("fill", "steelblue").text("u iso-lines");
svg.append("text").attr("x", ox + sc/2).attr("y", oy + 25).attr("text-anchor", "middle")
.attr("font-size", 13).attr("fill", "#e67e22").text("v iso-lines");
svg.append("text").attr("x", w/2).attr("y", h - 5).attr("text-anchor", "middle")
.attr("font-size", 13).attr("fill", "#666")
.text(`h = ${param_h.toFixed(1)} → ~${nLines} iso-lines per direction`);
return svg.node();
}

To compute a proper parameterization, the mesh \(\mathcal{M}\) must be cut open to become topologically a disk. This ensures all singular vertices lie on the boundary of the parameter domain (the angle defect of a singularity cannot be represented by an interior vertex).
The cut graph is computed in two steps:

1. Compute an initial cut graph that opens the surface into a topological disk (the primal edges not crossed by a dual spanning tree).
2. Connect each singular vertex to the cut graph by a shortest path, so that every singularity ends up on the boundary.

The cut graph is then simplified by removing open paths (leaves) that do not terminate at a singularity, yielding a minimal set of cuts.
{
const w = 500, h = 420;
const svg = d3.create("svg").attr("viewBox", `0 0 ${w} ${h}`).style("width", "100%").style("max-width", w + "px");
const cx = w/2, cy = 200;
if (cut_step === "Original mesh") {
// Draw a closed surface (simplified torus-like shape as a polygon mesh)
// Represent as a rectangular mesh with identified edges
const nx = 6, ny = 4;
const baseX = 80, baseY = 80, cellW = 55, cellH = 60;
for (let j = 0; j < ny; j++) {
for (let i = 0; i < nx; i++) {
const x0 = baseX + i*cellW, y0 = baseY + j*cellH;
svg.append("rect").attr("x", x0).attr("y", y0)
.attr("width", cellW).attr("height", cellH)
.attr("fill", "rgba(200,200,200,0.05)").attr("stroke", "#ccc").attr("stroke-width", 1);
}
}
// Singularities
const sings = [[baseX + 2*cellW, baseY + 1*cellH], [baseX + 4*cellW, baseY + 3*cellH]];
for (const s of sings) {
svg.append("circle").attr("cx", s[0]).attr("cy", s[1]).attr("r", 6).attr("fill", "#e74c3c");
}
svg.append("text").attr("x", w/2).attr("y", h - 20).attr("text-anchor", "middle")
.attr("font-size", 13).attr("fill", "#666")
.text("Closed surface mesh with 2 singularities (red)");
} else if (cut_step === "Cut to disk") {
const nx = 6, ny = 4;
const baseX = 80, baseY = 80, cellW = 55, cellH = 60;
for (let j = 0; j < ny; j++) {
for (let i = 0; i < nx; i++) {
const x0 = baseX + i*cellW, y0 = baseY + j*cellH;
svg.append("rect").attr("x", x0).attr("y", y0)
.attr("width", cellW).attr("height", cellH)
.attr("fill", "rgba(200,200,200,0.05)").attr("stroke", "#ccc").attr("stroke-width", 1);
}
}
// Cut graph (green lines)
const cuts = [
[[baseX, baseY], [baseX + 2*cellW, baseY]],
[[baseX + 2*cellW, baseY], [baseX + 2*cellW, baseY + cellH]], // to singularity 1
[[baseX + 2*cellW, baseY + cellH], [baseX + 4*cellW, baseY + cellH]],
[[baseX + 4*cellW, baseY + cellH], [baseX + 4*cellW, baseY + 3*cellH]], // to singularity 2
];
for (const [a, b] of cuts) {
svg.append("line").attr("x1", a[0]).attr("y1", a[1]).attr("x2", b[0]).attr("y2", b[1])
.attr("stroke", "#27ae60").attr("stroke-width", 3.5);
}
const sings = [[baseX + 2*cellW, baseY + 1*cellH], [baseX + 4*cellW, baseY + 3*cellH]];
for (const s of sings) {
svg.append("circle").attr("cx", s[0]).attr("cy", s[1]).attr("r", 6).attr("fill", "#e74c3c");
}
svg.append("text").attr("x", w/2).attr("y", h - 20).attr("text-anchor", "middle")
.attr("font-size", 13).attr("fill", "#27ae60")
.text("Cut graph (green): singularities connected to boundary");
} else {
// Unfolded disk
const diskCx = w/2, diskCy = 200, diskR = 150;
svg.append("ellipse").attr("cx", diskCx).attr("cy", diskCy)
.attr("rx", diskR * 1.3).attr("ry", diskR)
.attr("fill", "rgba(70,130,180,0.06)").attr("stroke", "#2c3e50").attr("stroke-width", 2);
// Integer grid in parameter domain
for (let i = -3; i <= 3; i++) {
const x = diskCx + i * 40;
svg.append("line").attr("x1", x).attr("y1", diskCy - diskR + 20).attr("x2", x).attr("y2", diskCy + diskR - 20)
.attr("stroke", "steelblue").attr("stroke-width", 0.8).attr("opacity", 0.5);
}
for (let i = -3; i <= 3; i++) {
const y = diskCy + i * 40;
svg.append("line").attr("x1", diskCx - diskR * 1.1).attr("y1", y).attr("x2", diskCx + diskR * 1.1).attr("y2", y)
.attr("stroke", "#e67e22").attr("stroke-width", 0.8).attr("opacity", 0.5);
}
// Singularities on integer positions
const singPos = [[diskCx - 40, diskCy], [diskCx + 80, diskCy + 40]];
for (const s of singPos) {
// Snap to integer grid
const sx = Math.round((s[0] - diskCx) / 40) * 40 + diskCx;
const sy = Math.round((s[1] - diskCy) / 40) * 40 + diskCy;
svg.append("circle").attr("cx", sx).attr("cy", sy).attr("r", 6).attr("fill", "#e74c3c");
}
// Cut seams on boundary (shown as doubled edges)
svg.append("line")
.attr("x1", diskCx - diskR * 0.8).attr("y1", diskCy - diskR * 0.6)
.attr("x2", diskCx - diskR * 0.2).attr("y2", diskCy - diskR * 0.9)
.attr("stroke", "#27ae60").attr("stroke-width", 3).attr("stroke-dasharray", "5,3");
svg.append("text").attr("x", w/2).attr("y", h - 20).attr("text-anchor", "middle")
.attr("font-size", 13).attr("fill", "#666")
.text("Unfolded parameter domain with integer grid (singularities on Z^2)");
}
return svg.node();
}

After cutting the mesh, two types of integer constraints arise:
Integer singularity locations: Each singularity must map to an integer \((u, v)\) position. This ensures the cone singularity appears as a quad mesh vertex with the correct valence.
Cross-boundary compatibility: At each cut edge \(e = \overline{pq}\), the \((u, v)\) values on both sides must be related by a grid automorphism. Introducing two integer variables \(j_e, k_e\) per cut edge:
\[(u_p', v_p') = \text{Rot}_{90}^{u_e}(u_p, v_p) + (j_e, k_e)\] \[(u_q', v_q') = \text{Rot}_{90}^{u_e}(u_q, v_q) + (j_e, k_e)\]
where \(u_e\) counts the quarter-turns determined by the cross field matching across the cut. In total, each cut edge introduces two integer variables \((j_e, k_e)\) while its transition constraints eliminate four continuous variables (the duplicated \(u, v\) values at \(p\) and \(q\)).
These transition functions \((j_e, k_e)\) with rotation \(\text{Rot}_{90}^{u_e}\) are exactly the integer-grid automorphisms from Condition 1 of the IGM definition. The MIQ method uses greedy rounding to find them; the 2013 paper uses branch-and-cut MIQP to find them with a guarantee of validity.
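Applying such a grid automorphism is a one-liner. A sketch in plain JavaScript (the counter-clockwise rotation convention and function names are assumptions for illustration):

```javascript
// Quarter-turn rotation: r in {0,1,2,3}, CCW convention (u,v) -> (-v,u).
function rot90(r, [u, v]) {
  switch (((r % 4) + 4) % 4) {
    case 0: return [u, v];
    case 1: return [-v, u];
    case 2: return [-u, -v];
    default: return [v, -u];
  }
}

// Transition across a cut edge: (u', v') = Rot90^r (u, v) + (j, k).
function transition(r, [j, k], uv) {
  const [ru, rv] = rot90(r, uv);
  return [ru + j, rv + k];
}

// A point at (0.5, 2) seen from the other side of a cut with one
// quarter-turn and integer translation (3, 1):
console.log(transition(1, [3, 1], [0.5, 2])); // -> [1, 1.5]
```

Because \(r\) is a quarter-turn and \((j, k)\) is integer, these maps send the integer grid \(\mathbb{Z}^2\) to itself, which is exactly why the iso-lines match up seamlessly across the cut.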
The full parameterization is thus a mixed-integer quadratic problem: minimize \(E_{\text{orient}}\) subject to integer singularity positions and the integer transition variables \((j_e, k_e)\) on the cut edges.
The greedy solver from Section 2 is applied: compute the continuous relaxation, then iteratively snap singularities to integer locations. After rounding a singularity, the continuous variables are recomputed to minimize \(E_{\text{orient}}\) with the new integer constraints.
> *"Applying our mixed-integer greedy solver to this parametrization task can be understood in an intuitive way. After computing an all-continuous solution, which corresponds to the unconstrained parametrization, we iteratively snap the singularities to integer locations."*
In practice, exact orientation (following the cross field) is more important than exact edge length. The anisotropic norm trades off these priorities:
\[\|(u, v)\|_{(\alpha, \beta)}^2 = \alpha u^2 + \beta v^2\]

The orientation energy then becomes

\[E_T = \|h\nabla u - \mathbf{u}_T\|_{(\gamma, 1)}^2 + \|h\nabla v - \mathbf{v}_T\|_{(1, \gamma)}^2\]

with \(\gamma \leq 1\), the components measured in the local frame \((\mathbf{u}_T, \mathbf{v}_T)\). Setting \(\gamma < 1\) allows the parameterization to stretch along the iso-line direction (less penalized) while maintaining strict alignment with the cross field (heavily penalized).
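The effect of \(\gamma\) is easiest to see on the two deviation components separately. A toy sketch (unit frame vectors and function name assumed for illustration):

```javascript
// Anisotropic penalty on the deviation of h*grad(u) from uT, split into
// the local frame (uT, vT): the uT-component is stretch (weight gamma),
// the vT-component is misalignment (weight 1). Assumes unit-length frame.
function anisoEnergyU(dev, uT, vT, gamma) {
  const s = dev[0] * uT[0] + dev[1] * uT[1]; // stretch component
  const m = dev[0] * vT[0] + dev[1] * vT[1]; // misalignment component
  return gamma * s * s + m * m;
}

// With gamma = 0.2, pure stretch of 0.5 costs 0.2 * 0.25, while the
// same amount of misalignment costs the full 0.25:
console.log(anisoEnergyU([0.5, 0], [1, 0], [0, 1], 0.2)); // 0.05
console.log(anisoEnergyU([0, 0.5], [1, 0], [0, 1], 0.2)); // 0.25
```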
Feature edges must map to integer iso-lines to appear as mesh edges in the output. For each feature edge \(\overline{pq}\) aligned with the cross field \(u\)-direction, add the constraint:
\[v_p = v_q \in \mathbb{Z}\]
This ensures the feature edge lies on an integer \(v\) iso-line. Each alignment condition adds one integer variable, handled by the greedy solver.
After the initial parameterization, singularities may be poorly placed — too close to each other, to a boundary, or to a feature. A local search post-process moves each singularity to a neighboring vertex if it reduces \(E_{\text{orient}}\). Moving a singularity along edge \(e_{ij}\) changes the period jump \(p_{ij}\), requiring only a right-hand-side update to the linear system (the matrix is unchanged, so the Cholesky factorization can be reused).
The minimizer of \(E_{\text{orient}}\) can still produce flipped triangles near singularities — the Jacobian determinant goes negative despite the optimal continuous solution. Local stiffening adds an adapted per-triangle weight \(w(T)\) to penalize high distortion:
\[E_{\text{orient}} = \sum_{T \in \mathcal{M}} w(T) \cdot E_T \cdot \text{area}(T)\]
The distortion is characterized by the singular values \(\sigma_1, \sigma_2\) of the Jacobian and the orientation \(\tau = \text{sign}(\det J_T)\):
\[\lambda = \left|\tau \frac{\sigma_1}{h} - 1\right| + \left|\frac{\sigma_2}{h} - 1\right|\]
The weight is updated iteratively using a smoothed Laplacian of the distortion:
\[w(T) \leftarrow w(T) + \min\{c \cdot |\Delta\lambda(T)|, d\}\]
with \(c = 1\) and maximum update \(d = 5\). After smoothing \(w(T)\), the parameterization is recomputed. This iterative stiffening process handles most degeneracies, though it cannot guarantee that all triangles remain non-degenerate.
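The distortion measure and weight update are simple to compute from the per-triangle Jacobian. A sketch using the closed-form singular values of a 2×2 matrix (function names illustrative; \(c = 1\), \(d = 5\) as in the text):

```javascript
// Singular values of a 2x2 Jacobian J via Frobenius norm and determinant:
// sigma1^2, sigma2^2 are the roots of t^2 - |J|_F^2 t + det(J)^2 = 0.
function singularValues([[a, b], [c, d]]) {
  const f = a * a + b * b + c * c + d * d;
  const det = a * d - b * c;
  const disc = Math.sqrt(Math.max(0, f * f - 4 * det * det));
  return { s1: Math.sqrt((f + disc) / 2),
           s2: Math.sqrt(Math.max(0, (f - disc) / 2)),
           tau: Math.sign(det) };
}

// lambda = |tau*sigma1/h - 1| + |sigma2/h - 1|
function distortion(J, h) {
  const { s1, s2, tau } = singularValues(J);
  return Math.abs((tau * s1) / h - 1) + Math.abs(s2 / h - 1);
}

// Stiffening update: w <- w + min(c*lambda, d).
function updateWeight(w, lambda, c = 1, d = 5) {
  return w + Math.min(c * lambda, d);
}

// An ideal (identity) Jacobian at h = 1 adds nothing; a flipped
// triangle (tau = -1) is penalized even though its singular values
// are perfect:
console.log(distortion([[1, 0], [0, 1]], 1));  // 0
console.log(distortion([[-1, 0], [0, 1]], 1)); // 2
```

The `tau` factor is the important part: without it, a reflected (flipped) triangle with unit singular values would look distortion-free.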
Explore how the stiffening weight affects the parameterization near a singularity:
{
const w = 500, h = 420;
const svg = d3.create("svg").attr("viewBox", `0 0 ${w} ${h}`).style("width", "100%").style("max-width", w + "px");
const cx = w/2, cy = 200;
const gridR = 160;
// Singularity at center
svg.append("circle").attr("cx", cx).attr("cy", cy).attr("r", 7).attr("fill", "#e74c3c");
svg.append("text").attr("x", cx + 12).attr("y", cy - 8)
.attr("font-size", 12).attr("fill", "#e74c3c").text("singularity");
// Draw deformed grid around singularity
// As stiffening increases, grid becomes more regular (less distortion)
const nLines = 8;
const distortion = Math.max(0, 1 - stiffening_iter * 0.18);
for (let i = -nLines/2; i <= nLines/2; i++) {
const t = i / (nLines/2);
// u iso-lines (vertical-ish)
const pts = d3.range(-1, 1.01, 0.05).map(s => {
const angle = Math.atan2(s, t);
const dist = Math.sqrt(t*t + s*s);
// Singularity distortion: lines curve near center
const curvature = distortion * 30 * Math.exp(-dist * 2) * Math.sin(angle * 2);
return [cx + t * gridR + curvature, cy + s * gridR];
});
svg.append("path").attr("d", d3.line()(pts))
.attr("fill", "none").attr("stroke", "steelblue").attr("stroke-width", 1).attr("opacity", 0.6);
// v iso-lines (horizontal-ish)
const pts2 = d3.range(-1, 1.01, 0.05).map(s => {
const angle = Math.atan2(t, s);
const dist = Math.sqrt(t*t + s*s);
const curvature = distortion * 30 * Math.exp(-dist * 2) * Math.sin(angle * 2);
return [cx + s * gridR, cy + t * gridR + curvature];
});
svg.append("path").attr("d", d3.line()(pts2))
.attr("fill", "none").attr("stroke", "#e67e22").attr("stroke-width", 1).attr("opacity", 0.6);
}
// Highlight potentially flipped region
if (stiffening_iter < 2) {
const flipR = 30 * (1 - stiffening_iter * 0.4);
svg.append("circle").attr("cx", cx - 20).attr("cy", cy + 15).attr("r", flipR)
.attr("fill", "rgba(231,76,60,0.15)").attr("stroke", "#e74c3c")
.attr("stroke-width", 1).attr("stroke-dasharray", "3,3");
svg.append("text").attr("x", cx - 20).attr("y", cy + 15 + flipR + 14)
.attr("text-anchor", "middle").attr("font-size", 10).attr("fill", "#e74c3c")
.text("flipped region");
}
const status = stiffening_iter === 0 ? "No stiffening — distortion and flips near singularity" :
stiffening_iter < 3 ? `Iteration ${stiffening_iter}: weight increased, distortion reducing` :
`Iteration ${stiffening_iter}: parameterization well-behaved`;
const statusColor = stiffening_iter < 2 ? "#e74c3c" : "#27ae60";
svg.append("text").attr("x", w/2).attr("y", h - 5).attr("text-anchor", "middle")
.attr("font-size", 13).attr("fill", statusColor)
.text(status);
return svg.node();
}

Local stiffening is a heuristic — it increases weights iteratively until the distortion decreases, but it cannot guarantee that all triangles will have positive Jacobian determinant. For fine quad meshes this works well, but for coarse quad layouts (where the target edge length is comparable to singularity spacing), stiffening often fails. This is the core motivation for the Bommes et al. 2013 paper’s convex MIQP formulation.
Putting all the pieces together, the MIQ pipeline is a step-through process:
Step 1/6: Input mesh with sparse directional constraints
The greedy solver is remarkably efficient due to local Gauss-Seidel updates:
| Model | Cross field dim | Cross field time | Param. dim | Param. time | #Sing |
|---|---|---|---|---|---|
| FANDISK | 17820 | 2.2s | 13776 | 2.4s | 30 |
| ROCKERARM | 32843 | 7.6s | 41552 | 25.5s | 36 |
| FERTILITY | 29342 | 6.6s | 27954 | 25.7s | 48 |
| BOTIJO | 25821 | 5.4s | 29994 | 44.0s | 74 |
| BEETLE | 46425 | 13.3s | 34705 | 10.3s | 6 |
The cross field computation is faster despite having more integer variables because rounding a period jump has only local effect (two triangles). Parameterization rounding has global effect (shifting a singularity affects the entire solution), requiring the expensive Cholesky solver.
The greedy solver consistently outperforms direct rounding:
> *"One limitation of our method is that for coarse quadrangulations of highly complex models with many cross field singularities, the local singularity relocation in the parametrization step is dominating the overall computation time."*
The MIQ method has two fundamental limitations that motivated subsequent work:

1. **No validity guarantee:** local stiffening is a heuristic, so degenerate or flipped triangles can survive, particularly in coarse quad layouts.
2. **Runtime on complex models:** the local singularity relocation step can dominate the overall computation time when the cross field has many singularities.
Both papers address the same problem — global parameterization for quad meshing — but take fundamentally different approaches to the integer constraints:
| Aspect | MIQ (Bommes et al. 2009) | IGM (Bommes et al. 2013) |
|---|---|---|
| Solver | Greedy rounding (one variable at a time) | Branch-and-cut MIQP (CPLEX) |
| Orientation guarantee | None — heuristic stiffening | Convex trisector constraints (guaranteed) |
| Complexity | Full mesh (\(O(\|V\|)\) variables) | Decimated mesh (\(O(\|S\|)\) variables) |
| Singularity handling | Local relocation post-process | Geodesic separation constraints |
| Fine meshes | Works well (stiffening usually succeeds) | Overkill — greedy is sufficient |
| Coarse layouts | Often fails (degeneracies) | Reliable (guaranteed valid IGM) |
| Runtime | Seconds (fast greedy + local updates) | Seconds to minutes (decimation + MIQP) |
| Quality metric | \(E_T = \|h\nabla u - \mathbf{u}_T\|^2 + \|h\nabla v - \mathbf{v}_T\|^2\) | \(E_\alpha^k = \frac{1}{2}\sum A_t \|R_t^T\nabla f_t - H_t\|_\alpha^k\) |
| Cross-boundary compatibility | \(\text{Rot}_{90}^{u_e}(u,v) + (j_e, k_e)\) | Same: \(R_{90}^{r_{ij}}\mathbf{a} + \mathbf{t}_{ij}\) |
| Integer constraints | Singularities snapped to \(\mathbb{Z}^2\) | Same, plus separator constraints |
MIQ (2009) established the formulation — cross field smoothness + parameterization orientation as mixed-integer quadratic programs, solved by greedy rounding. IGM (2013) kept the formulation but replaced the solver with a global optimizer (branch-and-cut) and added the mathematical machinery (convex orientation constraints, complexity reduction, singularity separation) to make it practical. The transition functions, energy metrics, and integer constraint structure are fundamentally the same; the difference is in how the integer solution is found and what guarantees it provides.
The MIQ parameterization is the direct foundation for our MetricFrameField pipeline’s parameterization stage. The key correspondence:
| MIQ Concept | MetricFrameField Implementation |
|---|---|
| Cross field angles \(\theta_i\) | Cross field from metric-customized connection |
| Period jumps \(p_{ij}\) | Matching numbers across edges |
| Smoothness energy \(E_{\text{smooth}}\) | Connection-based smoothness (angle defect) |
| Orientation energy \(E_{\text{orient}}\) | Parameterization energy with target metric \(H_t\) |
| Greedy mixed-integer solver | Greedy rounding of period jumps and translations |
| Cut graph construction | Dual spanning tree + singularity paths |
| Local stiffening | Not yet implemented (potential improvement) |
The main difference: MIQ uses a fixed Euclidean metric with isotropic/anisotropic sizing (\(h\) and \(\gamma\)), while our approach computes a custom Riemannian metric via optimization (Jiang et al. 2015) that simultaneously encodes the desired sizing, orientation, and singularity structure. The parameterization stage then uses the MIQ-style greedy solver to find the integer constraints.