Sharp Thresholds for a Random Constraint Satisfaction Problem


1. Introduction

The constraint satisfaction problem (CSP for short) originated in artificial intelligence and has become an important topic in interdisciplinary research across computer science, mathematics and statistical physics. Many problems in artificial intelligence, computer science and automatic control can be modeled as constraint satisfaction problems. Moreover, CSPs are widely used in practical applications such as resource allocation, pattern recognition, logistics scheduling and temporal reasoning.

In general, a CSP is defined on a set of variables and a set of constraints. Each variable has a corresponding non-empty domain, whose size can be fixed or can vary with the number of variables. Each constraint involves a randomly selected subset of variables and a corresponding set of compatible assignments that specifies the allowable combinations of values for the variables in the constraint. The randomly selected constraints constitute a random CSP instance. An assignment that satisfies all the constraints simultaneously is called a solution of the CSP instance. Interestingly, experimental results suggest that the probability of a random CSP instance having a solution exhibits a phase transition. In a seminal paper, Cheeseman et al. showed empirically that the hardest instances of CSPs often occur around a rapid transition in solubility [1] . Since then, the phase transition phenomenon in CSPs and its formation mechanism have become one of the focuses of computational complexity theory [2] - [6] . The initial standard models for binary random CSP are models A, B, C and D [7] [8] . However, as the number of variables increases, instances of the standard models contain flawed variables and turn out to be asymptotically trivially insoluble, so these models do not have an asymptotic phase transition [9] . To overcome this shortcoming, specific structures were introduced into the constraint relations so that the instances generated from the improved CSP models are arc consistent, path consistent, strongly 3-consistent or weakly 4-consistent [10] [11] [12] [13] . Although these improved models can eliminate flaws and produce nontrivial hard instances, their constraints are not generated in a simple, natural way. In 2000, Xu and Li proposed the RB model [14] , a modification of the standard model B [7] in terms of the domain size and the number of constraints.
The RB model is a typical random CSP model with a large, growing domain size, which allows it to overcome the shortcoming of model B that it cannot produce hard instances. Xu and Li also showed that the RB model exhibits an exact phase transition and that the location of the transition point can be determined precisely [14] . Moreover, Xu et al. proved theoretically and experimentally that random instances of the RB model have exponential tree-resolution complexity in the phase transition region, i.e., there are many hard instances in the transition region [15] [16] , which has great practical significance for algorithm testing. In 2011, Zhao and Zheng [17] introduced the finite-size scaling method from statistical physics to analyze the threshold behavior of the RB model, and gave an upper bound on the width of the scaling window of its transition region. Inspired by random k-SAT with moderately growing k [18] and by the RB model, Fan and Shen proposed a new CSP model, named k-CSP [19] . In k-CSP, the domain size is fixed while the constraint length k grows with the number of variables n. Fan et al. subsequently proposed d-k-CSP [20] , in which both the domain size d and the constraint length k vary as integer functions of n. It has been proved rigorously that both models have a phase transition and that the exact transition point can be located. In 2011, Zhao et al. proposed a message-passing algorithm based on belief propagation to solve random CSP instances generated by the RB model with a large, growing domain size [21] . Subsequently, Zhao et al. put forward a belief propagation algorithm based on variable entropy [22] and a reinforced belief propagation algorithm [23] to solve random instances of the RB model. Furthermore, they identified the connection between the structural features of the solution space and the complexity of algorithmic solving.

In this paper, we propose a new random CSP model, called the d-p-RB model, which generalizes the RB model in the constraint tightness p and the variable domain size d. In the RB model, the domain size $d={n}^{\alpha}$ (α is a constant) is a power function of the number of variables n, and the constraint tightness p is fixed. In the d-p-RB model, we uniformly divide the random constraints into several groups and diversify the domain size d as well as the constraint tightness p across groups. More specifically, for an instance with n variables in the d-p-RB model, the domain size $d\in \left[{n}^{\alpha},{n}^{{n}^{\gamma}}\right]$ (α, γ are constants) is defined within a range rather than being a single value as in the RB model, and the ith group of constraints has its own constraint tightness ${p}_{i}$ ( $0<{p}_{i}<1$ ), in contrast to the fixed p of the RB model. By the second moment method, we show that the d-p-RB model exhibits an exact phase transition under certain conditions, and that the transition point can be located precisely. Moreover, since both d and p vary in the d-p-RB model, it has broader practical significance and theoretical value.

2. Preliminaries

2.1. A CSP Instance

A CSP instance $I=\left(U,D,C\right)$ of d-p-RB model is defined as follows:

1) $U=\left\{{u}_{1},{u}_{2},\cdots ,{u}_{n}\right\}$ is a set of n variables.

2) $D=\left\{{D}_{1},{D}_{2},\cdots ,{D}_{n}\right\}$ is a domain set. Each variable ${u}_{i}$ ( $i=1,2,\cdots ,n$ ) takes values from ${D}_{i}$ , whose size $\left|{D}_{i}\right|=d\in \left[{n}^{\alpha},{n}^{{n}^{\gamma}}\right]$ , where α and γ are constants.

3) $C=\left\{{C}_{1},{C}_{2},\cdots ,{C}_{t}\right\}$ is a set of constraints, and each constraint ${C}_{i}$ is a pair ( ${S}_{i},{R}_{i}$ ), where ${S}_{i}\text{=}\left\{{s}_{{i}_{1}},{s}_{{i}_{2}},\cdots ,{s}_{{i}_{k}}\right\}$ (k is the length of the constraint) is a subset of U, and ${R}_{i}\subseteq {D}_{{i}_{1}}\times {D}_{{i}_{2}}\times \cdots \times {D}_{{i}_{k}}$ is a compatible assignments set.

A constraint ${C}_{i}$ is satisfied if the k-tuple of values assigned to variables in ${S}_{i}$ is contained in ${R}_{i}$ . A solution of a CSP instance is an assignment to all the variables that satisfies all constraints.

2.2. d-p-RB Model

A random CSP instance in d-p-RB model is generated in the following two steps:

Step 1. We select with repetition l groups of constraints. Each group contains $t/l$ constraints, and each constraint involves k variables randomly selected from U and distinct from each other.

Step 2. For each constraint in the ith group ( $i=1,2,\cdots ,l$ ), we uniformly select at random, without repetition, ${p}_{i}{d}^{k}$ ( $0<{p}_{i}<1$ is the constraint tightness of the group) compatible assignments to form its compatible assignments set.
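As a concrete illustration, the two steps above can be sketched as follows (a minimal sketch; the function names and the tiny parameter values are ours, and the tightness values must make each ${p}_{i}{d}^{k}$ an integer):

```python
import itertools
import random

def generate_instance(n, d, k, l, t, tightness, rng=None):
    """Sample one random d-p-RB instance.

    n: number of variables, each ranging over the domain {0, ..., d-1}
    k: constraint length; l: number of constraint groups
    t: total number of constraints (must be divisible by l)
    tightness: [p_1, ..., p_l], where each p_i * d**k is an integer
    """
    rng = rng or random.Random()
    all_tuples = list(itertools.product(range(d), repeat=k))
    constraints = []
    for i in range(l):
        n_compat = round(tightness[i] * d ** k)      # p_i * d^k compatible tuples
        for _ in range(t // l):
            scope = tuple(rng.sample(range(n), k))   # k distinct variables
            compat = set(rng.sample(all_tuples, n_compat))  # without repetition
            constraints.append((scope, compat))
    return constraints

def count_solutions(n, d, constraints):
    """Exhaustively count assignments satisfying every constraint."""
    return sum(
        all(tuple(a[v] for v in scope) in compat for scope, compat in constraints)
        for a in itertools.product(range(d), repeat=n))
```

For instance, `generate_instance(4, 3, 2, 2, 2, [4/9, 6/9])` draws one binary constraint from each of two groups over four ternary variables.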

3. Main Result

We let $t=rn\mathrm{ln}d$ , where r is a constant control parameter that determines how many constraints a CSP instance has. Let $p=\mathrm{min}\left\{{p}_{1},{p}_{2},\cdots ,{p}_{l}\right\}$ , where each ${p}_{i}$ ( $i=1,2,\cdots ,l$ ) determines how restrictive the constraints in group i are. Let $\mathrm{Pr}\left(sat\right)$ denote the probability of a random d-p-RB instance being satisfiable; then we have the following theorem.

Theorem Let ${r}_{m}=-\frac{l}{{\displaystyle \underset{i=1}{\overset{l}{\sum}}\mathrm{ln}{p}_{i}}}$ ( $0<{p}_{i}<1$ ). If the constants k, ${p}_{i}$ , α, γ satisfy the relations $\alpha >\frac{\gamma +1}{k-1}$ and $k\ge \mathrm{max}\left\{\frac{1}{p},\gamma +2\right\}$ , then

$\underset{n\to \infty}{\mathrm{lim}}\mathrm{Pr}\left(sat\right)=\left\{\begin{array}{ll}0 & \text{when}\text{\hspace{0.17em}}r>{r}_{m},\\ 1 & \text{when}\text{\hspace{0.17em}}r<{r}_{m}.\end{array}\right.\text{\hspace{1em}}\text{\hspace{1em}}\begin{array}{c}(1)\\ (2)\end{array}$

The theorem shows that, when the number of variables n is sufficiently large, the satisfiability probability shifts abruptly at ${r}_{m}$ .
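As a quick numerical illustration (the helper name `r_m` is ours), when all ${p}_{i}$ equal a common p the threshold reduces to the RB-model value $-1/\mathrm{ln}p$ , and tightening any group (smaller ${p}_{i}$ ) lowers the threshold:

```python
import math

def r_m(tightness):
    """Critical control parameter r_m = -l / sum_i ln(p_i)."""
    return -len(tightness) / sum(math.log(p) for p in tightness)

# Uniform tightness recovers the RB-model threshold -1/ln(p).
assert abs(r_m([0.5, 0.5, 0.5]) - (-1 / math.log(0.5))) < 1e-12

# Tighter constraints (smaller p_i) shift the threshold to fewer constraints.
assert r_m([0.3, 0.7]) < r_m([0.7, 0.7])
```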

4. Proof of the Theorem

Let N denote the number of solutions of a random CSP instance I. The expectation and the second moment of N are denoted by $E\left(N\right)$ and $E\left({N}^{2}\right)$ . When $r>{r}_{m}$ , we use the Markov inequality $\mathrm{Pr}\left(sat\right)\le E\left(N\right)$ . When $r<{r}_{m}$ , by the second moment method, we estimate an upper bound on $\frac{E\left({N}^{2}\right)}{{E}^{2}\left(N\right)}$ , and then the Cauchy-Schwarz inequality $\mathrm{Pr}\left(sat\right)\ge \frac{{E}^{2}\left(N\right)}{E\left({N}^{2}\right)}$ yields the result. We now treat the two cases in turn.

4.1. Proof of r > r_{m}

Since the constraints are generated independently in d-p-RB model, the expected number of solutions $E\left(N\right)$ is given by

$\begin{array}{c}E\left(N\right)={d}^{n}{\left({p}_{1}{p}_{2}\cdots {p}_{l}\right)}^{\frac{t}{l}}\\ =\mathrm{exp}\left(n\mathrm{ln}d+\frac{r}{l}n\mathrm{ln}d{\displaystyle \underset{i=1}{\overset{l}{\sum}}\mathrm{ln}{p}_{i}}\right)\\ =\mathrm{exp}\left[n\mathrm{ln}d\left(1+\frac{r}{l}{\displaystyle \underset{i=1}{\overset{l}{\sum}}\mathrm{ln}{p}_{i}}\right)\right]\end{array}$ . (3)

Since $r>{r}_{m}$ , we have

$1+\frac{r}{l}{\displaystyle \underset{i=1}{\overset{l}{\sum}}\mathrm{ln}{p}_{i}}<0$ , (4)

thus

$\underset{n\to \infty}{\mathrm{lim}}E\left(N\right)=0$ . (5)

Then using the Markov inequality $\mathrm{Pr}\left(sat\right)\le E\left(N\right)$ together with (5), we obtain

$\underset{n\to \infty}{\mathrm{lim}}\mathrm{Pr}\left(sat\right)=0$ . (6)
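The first line of (3) can be sanity-checked by Monte Carlo on a toy setting (all parameter values below are ours, chosen so that each ${p}_{i}{d}^{k}$ is an integer):

```python
import itertools
import math
import random

# Toy d-p-RB setting: n = 4 variables over a domain of size d = 3,
# constraint length k = 2, l = 2 groups with one constraint each (t = 2),
# and tightness p = (4/9, 6/9).
n, d, k, l, t = 4, 3, 2, 2, 2
p = (4 / 9, 6 / 9)

# Closed form: E(N) = d^n * prod_i p_i^(t/l) = 81 * (4/9) * (6/9) = 24.
expected = d ** n * math.prod(pi ** (t // l) for pi in p)

rng = random.Random(1)
tuples_k = list(itertools.product(range(d), repeat=k))
assignments = list(itertools.product(range(d), repeat=n))

total = 0
trials = 10000
for _ in range(trials):
    cons = []
    for pi in p:  # one constraint per group
        scope = tuple(rng.sample(range(n), k))
        compat = set(rng.sample(tuples_k, round(pi * d ** k)))
        cons.append((scope, compat))
    # Count the solutions of this instance exhaustively.
    total += sum(
        all(tuple(a[v] for v in scope) in compat for scope, compat in cons)
        for a in assignments)

mean = total / trials  # empirical estimate of E(N), close to 24
```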

4.2. Proof of r < r_{m}

Definition 1 (The assignment pair) An assignment pair $\langle {t}_{i},{t}_{j}\rangle $ is an ordered pair, where ${t}_{i}=\left({a}_{i1},{a}_{i2},\cdots ,{a}_{in}\right)$ , ${t}_{j}=\left({a}_{j1},{a}_{j2},\cdots ,{a}_{jn}\right)$ , and ${a}_{ih},{a}_{jh}\in {D}_{h}$ ( $h=1,2,\cdots ,n$ ). An assignment pair $\langle {t}_{i},{t}_{j}\rangle $ satisfies a CSP instance if and only if both ${t}_{i}$ and ${t}_{j}$ satisfy the instance.

Definition 2 (The similarity number) Define a function as follows

$Sam\left({a}_{ih},{a}_{jh}\right)=\{\begin{array}{l}1\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}{a}_{ih}={a}_{jh}\\ 0\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}{a}_{ih}\ne {a}_{jh}\end{array}$

Assume $m={\displaystyle \underset{h=1}{\overset{n}{\sum}}Sam\left({a}_{ih},{a}_{jh}\right)}$ ; then the assignment pair $\langle {t}_{i},{t}_{j}\rangle $ has m identical assignments, i.e., the similarity number of $\langle {t}_{i},{t}_{j}\rangle $ is m. It is obvious that $0\le m\le n$ .

Next, we use the second moment method to complete the proof.

Assume that $P\left(\langle {t}_{i},{t}_{j}\rangle \right)$ denotes the probability that ${t}_{i}$ and ${t}_{j}$ satisfy the instance I simultaneously. We analyze this probability as follows:

Since there are m identical assignments in ${t}_{i}$ and ${t}_{j}$ , for each constraint, we have the following two cases:

1) The values assigned to the k variables restricted by the constraint are the same in ${t}_{i}$ and ${t}_{j}$ . In this case, the probability of $\langle {t}_{i},{t}_{j}\rangle $ satisfying the constraint is

$\frac{\left(\begin{array}{c}{d}^{k}-1\\ {p}_{i}{d}^{k}-1\end{array}\right)}{\left(\begin{array}{c}{d}^{k}\\ {p}_{i}{d}^{k}\end{array}\right)}={p}_{i}$ , and for a random constraint, the probability of this situation is $\frac{\left(\begin{array}{c}m\\ k\end{array}\right)}{\left(\begin{array}{c}n\\ k\end{array}\right)}$ .

2) Otherwise, the probability of $\langle {t}_{i},{t}_{j}\rangle $ satisfying the constraint is

$\frac{\left(\begin{array}{c}{d}^{k}-2\\ {p}_{i}{d}^{k}-2\end{array}\right)}{\left(\begin{array}{c}{d}^{k}\\ {p}_{i}{d}^{k}\end{array}\right)}={p}_{i}\frac{{p}_{i}{d}^{k}-1}{{d}^{k}-1}$ , and the probability that $\langle {t}_{i},{t}_{j}\rangle $ falls into this situation is $1-\frac{\left(\begin{array}{c}m\\ k\end{array}\right)}{\left(\begin{array}{c}n\\ k\end{array}\right)}$ .

Let

${\sigma}_{m,n}=\frac{\left(\begin{array}{c}m\\ k\end{array}\right)}{\left(\begin{array}{c}n\\ k\end{array}\right)},\text{\hspace{1em}}s=\frac{m}{n}.$ (7)

Since

${\sigma}_{m,n}=\frac{\left(\begin{array}{c}m\\ k\end{array}\right)}{\left(\begin{array}{c}n\\ k\end{array}\right)}=\frac{m\left(m-1\right)\cdots \left(m-k+1\right)}{n\left(n-1\right)\cdots \left(n-k+1\right)}\le {\left(\frac{m}{n}\right)}^{k}$ (8)

we have

${\sigma}_{m,n}\le {s}^{k}.$ (9)
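Inequality (9) can be verified exhaustively for small parameters (a quick numerical check, not part of the proof):

```python
from math import comb

def sigma(m, n, k):
    """Exact ratio C(m, k) / C(n, k); it vanishes when m < k."""
    return comb(m, k) / comb(n, k)

# sigma_{m,n} <= (m/n)^k for all 0 <= m <= n.
for n in (5, 12, 30):
    for k in (2, 3, 5):
        for m in range(n + 1):
            assert sigma(m, n, k) <= (m / n) ** k + 1e-15
```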

Since the constraints are generated independently, the probability that the assignment pair $\langle {t}_{i},{t}_{j}\rangle $ satisfies all the constraints in a random instance I is

$\begin{array}{c}P\left(\langle {t}_{i},{t}_{j}\rangle \right)={\displaystyle \underset{i=1}{\overset{l}{\prod}}{\left[{p}_{i}{\sigma}_{m,n}+{p}_{i}\frac{{p}_{i}{d}^{k}-1}{{d}^{k}-1}\left(1-{\sigma}_{m,n}\right)\right]}^{\frac{t}{l}}}\\ \le {\displaystyle \underset{i=1}{\overset{l}{\prod}}{p}_{i}^{\frac{t}{l}}{\left[{\sigma}_{m,n}+{p}_{i}\left(1-{\sigma}_{m,n}\right)\right]}^{\frac{t}{l}}}\\ \le {\displaystyle \underset{i=1}{\overset{l}{\prod}}{p}_{i}^{\frac{t}{l}}{\left[{p}_{i}+\left(1-{p}_{i}\right){s}^{k}\right]}^{\frac{t}{l}}}\\ ={\displaystyle \underset{i=1}{\overset{l}{\prod}}{p}_{i}^{\frac{2t}{l}}{\left(1+\frac{1-{p}_{i}}{{p}_{i}}{s}^{k}\right)}^{\frac{t}{l}}}.\end{array}$ (10)

Let ${A}_{m}$ be the set of assignment pairs whose similarity number is m, $\left|{A}_{m}\right|$ be the cardinality of ${A}_{m}$ , then we have

$\left|{A}_{m}\right|={d}^{n}\left(\begin{array}{c}n\\ m\end{array}\right){\left(d-1\right)}^{n-m}={d}^{2n}\left(\begin{array}{c}n\\ m\end{array}\right){\left(1-\frac{1}{d}\right)}^{n-m}{\left(\frac{1}{d}\right)}^{m}$ . (11)
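Formula (11) counts, for each similarity number m, the ordered pairs agreeing in exactly m coordinates; it can be confirmed by brute force on a tiny instance (the parameter values are ours):

```python
import itertools
from math import comb

n, d = 3, 3
assignments = list(itertools.product(range(d), repeat=n))

# Tally ordered pairs by similarity number m (number of agreeing coordinates).
counts = [0] * (n + 1)
for ti in assignments:
    for tj in assignments:
        m = sum(a == b for a, b in zip(ti, tj))
        counts[m] += 1

# Formula (11): |A_m| = d^n * C(n, m) * (d - 1)^(n - m).
for m in range(n + 1):
    assert counts[m] == d ** n * comb(n, m) * (d - 1) ** (n - m)
```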

Thus by (10) and (11), the second order moment of the number of solutions of the random instance of d-p-RB model is

$\begin{array}{c}E\left({N}^{2}\right)={\displaystyle \underset{m=0}{\overset{n}{\sum}}\left|{A}_{m}\right|}P\left(\langle {t}_{i},{t}_{j}\rangle \right)\\ \le {\displaystyle \underset{m=0}{\overset{n}{\sum}}{d}^{2n}\left(\begin{array}{c}n\\ m\end{array}\right){\left(1-\frac{1}{d}\right)}^{n-m}{\left(\frac{1}{d}\right)}^{m}{\displaystyle \underset{i=1}{\overset{l}{\prod}}{p}_{i}^{\frac{2t}{l}}{\left(1+\frac{1-{p}_{i}}{{p}_{i}}{s}^{k}\right)}^{\frac{t}{l}}}}\\ ={E}^{2}\left(N\right){\displaystyle \underset{m=0}{\overset{n}{\sum}}\left(\begin{array}{c}n\\ m\end{array}\right){\left(1-\frac{1}{d}\right)}^{n-m}{\left(\frac{1}{d}\right)}^{m}{\displaystyle \underset{i=1}{\overset{l}{\prod}}{\left(1+\frac{1-{p}_{i}}{{p}_{i}}{s}^{k}\right)}^{\frac{t}{l}}}}\\ ={E}^{2}\left(N\right){\displaystyle \underset{0\le s\le 1}{\sum}B\left(s\right)W\left(s\right)},\end{array}$ (12)

where

$B\left(s\right)=\left(\begin{array}{c}n\\ ns\end{array}\right){\left(1-\frac{1}{d}\right)}^{n-ns}{\left(\frac{1}{d}\right)}^{ns}$ (13)

$W\left(s\right)={\displaystyle \underset{i=1}{\overset{l}{\prod}}{\left(1+\frac{1-{p}_{i}}{{p}_{i}}{s}^{k}\right)}^{\frac{t}{l}}}$ , (14)

i.e.,

$\frac{E\left({N}^{2}\right)}{{E}^{2}\left(N\right)}\le {\displaystyle \underset{0\le s\le 1}{\sum}B\left(s\right)W}\left(s\right)$ . (15)

Considering that $0\le s\le 1$ , in order to bound the right-hand side of Inequality (15), we divide the interval [0,1] into three parts: $\left[0,{s}_{1}\right]$ , $\left[{s}_{1},{s}_{2}\right]$ , $\left[{s}_{2},1\right]$ , where ${s}_{1}=\frac{1}{{n}^{\beta}}$ , ${s}_{2}=\frac{1}{{n}^{\frac{\gamma +1}{k-1}}}$ , and β satisfies $\frac{\gamma +1}{k-1}<\beta <\mathrm{min}\left\{1,\alpha \right\}$ .

1) For $s\in \left[0,{s}_{1}\right]$ , recalling that

$t=rn\mathrm{ln}d$ , $d\in \left[{n}^{\alpha},{n}^{{n}^{\gamma}}\right]$ , and $p=\mathrm{min}\left\{{p}_{1},{p}_{2},\cdots ,{p}_{l}\right\}$ , we have

$\begin{array}{c}W\left(s\right)={\displaystyle \underset{i=1}{\overset{l}{\prod}}{\left(1+\frac{1-{p}_{i}}{{p}_{i}}{s}^{k}\right)}^{\frac{rn\mathrm{ln}d}{l}}}\\ \le {\left(1+\frac{1-p}{p}{s}^{k}\right)}^{rn\mathrm{ln}d}\\ \le \mathrm{exp}\left[rn\mathrm{ln}d\mathrm{ln}\left(1+\frac{1-p}{p}{s}^{k}\right)\right]\\ \le \mathrm{exp}\left(rn\mathrm{ln}d\cdot \frac{1-p}{p}\cdot {s}^{k}\right)\\ \le \mathrm{exp}\left[\frac{r\left(1-p\right)}{p}{n}^{\gamma +1-\beta k}\mathrm{ln}n\right].\end{array}$ (16)

Since $\beta >\frac{\gamma +1}{k-1}>\frac{\gamma +1}{k}$ , we have $\gamma +1-\beta k<0$ , so we get

$\underset{n\to \infty}{\mathrm{lim}}W\left(s\right)=1$ . (17)

Then it is not hard to obtain that

${\displaystyle \underset{0\le s\le {s}_{1}}{\sum}B\left(s\right)W\left(s\right)}\le \left(1+o\left(1\right)\right){\displaystyle \underset{0\le s\le {s}_{1}}{\sum}B\left(s\right)}\le \left(1+o\left(1\right)\right){\displaystyle \underset{0\le s\le 1}{\sum}B\left(s\right)}=1+o\left(1\right)$ . (18)

2) For $s\in \left[{s}_{1},{s}_{2}\right]$ , let $f\left(s\right)=-s\mathrm{ln}s-\left(1-s\right)\mathrm{ln}\left(1-s\right)$ . By Stirling's formula $n!={\left(\frac{n}{\text{e}}\right)}^{n}\sqrt{2\text{\pi}n}{\text{e}}^{\frac{\epsilon}{12n}}$ , where $\left|\epsilon \right|<1$ , it is not hard to see that

$\left(\begin{array}{c}n\\ ns\end{array}\right)<{\text{e}}^{nf\left(s\right)}$ . (19)
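The entropy bound (19) follows from $1={\left(s+\left(1-s\right)\right)}^{n}\ge \left(\begin{array}{c}n\\ ns\end{array}\right){s}^{ns}{\left(1-s\right)}^{n-ns}$ and can be checked directly (a quick numerical check, not part of the proof):

```python
from math import comb, exp, log

def f(s):
    """Entropy function f(s) = -s ln s - (1 - s) ln(1 - s)."""
    return -s * log(s) - (1 - s) * log(1 - s)

# C(n, ns) < e^{n f(s)} for every 0 < s < 1 with ns an integer.
for n in (10, 50, 200):
    for m in range(1, n):
        assert comb(n, m) < exp(n * f(m / n))
```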

Since

$B\left(s\right)W\left(s\right)\le \left(\begin{array}{c}n\\ ns\end{array}\right){\left(1-\frac{1}{d}\right)}^{n-ns}{\left(\frac{1}{d}\right)}^{ns}{\left(1+\frac{1-p}{p}{s}^{k}\right)}^{rn\mathrm{ln}d}$ (20)

we get

$\begin{array}{c}\mathrm{ln}B\left(s\right)W\left(s\right)\le nf\left(s\right)+\left(n-ns\right)\mathrm{ln}\left(1-\frac{1}{d}\right)-ns\mathrm{ln}d+rn\mathrm{ln}d\mathrm{ln}\left(1+\frac{1-p}{p}{s}^{k}\right)\\ \le n\left[f\left(s\right)-s\mathrm{ln}d+\frac{r\left(1-p\right)}{p}\cdot \mathrm{ln}d\cdot {s}^{k}\right]\\ \le ns\left[-\mathrm{ln}s-\frac{1-s}{s}\mathrm{ln}\left(1-s\right)-\alpha \mathrm{ln}n+\frac{r\left(1-p\right)}{p}\cdot {n}^{\gamma}\cdot \mathrm{ln}n\cdot {s}^{k-1}\right].\end{array}$ (21)

For $s\in \left[{s}_{1},{s}_{2}\right]$ , we have

$\begin{array}{c}\mathrm{ln}B\left(s\right)W\left(s\right)\le ns\left[\beta \mathrm{ln}n+1-\alpha \mathrm{ln}n+\frac{r\left(1-p\right)}{p}\cdot {n}^{\gamma}\cdot \mathrm{ln}n\cdot {\left(\frac{1}{{n}^{\frac{\gamma +1}{k-1}}}\right)}^{k-1}\right]\\ =ns\mathrm{ln}n\left[\beta -\alpha +\frac{1}{\mathrm{ln}n}+\frac{r\left(1-p\right)}{p}\cdot \frac{1}{n}\right]\\ \le {n}^{1-\beta}\mathrm{ln}n\left[\beta -\alpha +o\left(1\right)\right].\end{array}$ (22)

Since $\beta <\mathrm{min}\left\{1,\alpha \right\}$ , we have

$\underset{n\to \infty}{\mathrm{lim}}\mathrm{ln}B\left(s\right)W\left(s\right)=-\infty $ , (23)

hence

$\begin{array}{c}{\displaystyle \underset{{s}_{1}\le s\le {s}_{2}}{\sum}B\left(s\right)W\left(s\right)}\le n\underset{{s}_{1}\le s\le {s}_{2}}{\mathrm{max}}B\left(s\right)W\left(s\right)\\ \le n\mathrm{exp}\left[{n}^{1-\beta}\mathrm{ln}n\left(\beta -\alpha +o\left(1\right)\right)\right]\\ ={n}^{{n}^{1-\beta}\left(\beta -\alpha +o\left(1\right)\right)+1}\to 0\text{\hspace{0.17em}}\text{\hspace{0.17em}}\left(n\to \infty \right).\end{array}$ (24)

Thus, for arbitrarily small $\epsilon >0$ , there exists an integer ${N}_{1}>0$ such that for all $n>{N}_{1}$ ,

${\displaystyle \underset{{s}_{1}\le s\le {s}_{2}}{\sum}B\left(s\right)W\left(s\right)}<\frac{\epsilon}{2}$ . (25)

3) For $s\in \left[{s}_{2},1\right]$ we have

$B\left(s\right)W\left(s\right)=\left(\begin{array}{c}n\\ ns\end{array}\right){\left(1-\frac{1}{d}\right)}^{n-ns}{\left(\frac{1}{d}\right)}^{ns}{\displaystyle \underset{i=1}{\overset{l}{\prod}}{\left(1+\frac{1-{p}_{i}}{{p}_{i}}{s}^{k}\right)}^{\frac{rn\mathrm{ln}d}{l}}}$ (26)

Then

$\begin{array}{c}\mathrm{ln}B\left(s\right)W\left(s\right)\le nf\left(s\right)-ns\mathrm{ln}d+\frac{r}{l}n\mathrm{ln}d{\displaystyle \underset{i=1}{\overset{l}{\sum}}\mathrm{ln}\left(1+\frac{1-{p}_{i}}{{p}_{i}}{s}^{k}\right)}\\ =n\left[f\left(s\right)+\mathrm{ln}d\left(-s+\frac{r}{l}{\displaystyle \underset{i=1}{\overset{l}{\sum}}\mathrm{ln}\left(1+\frac{1-{p}_{i}}{{p}_{i}}{s}^{k}\right)}\right)\right].\end{array}$ (27)

Let $g\left(s\right)=\frac{r}{l}{\displaystyle \underset{i=1}{\overset{l}{\sum}}\mathrm{ln}\left(1+\frac{1-{p}_{i}}{{p}_{i}}{s}^{k}\right)}-s$ , differentiating $g\left(s\right)$ with respect to s, we get

${g}^{\prime}\left(s\right)=\frac{r}{l}{\displaystyle \underset{i=1}{\overset{l}{\sum}}\frac{k\left(1-{p}_{i}\right){s}^{k-1}}{{p}_{i}+\left(1-{p}_{i}\right){s}^{k}}}-1$ , (28)

and then

${g}^{\prime \prime}\left(s\right)=\frac{r}{l}{\displaystyle \underset{i=1}{\overset{l}{\sum}}\frac{k\left(1-{p}_{i}\right){s}^{k-2}\left[\left(k-1\right){p}_{i}-\left(1-{p}_{i}\right){s}^{k}\right]}{{\left[{p}_{i}+\left(1-{p}_{i}\right){s}^{k}\right]}^{2}}}$ . (29)

By the condition $k\ge \frac{1}{p}$ , we have $\left(k-1\right){p}_{i}\ge 1-{p}_{i}\ge \left(1-{p}_{i}\right){s}^{k}$ for $s\in \left[0,1\right]$ , so ${g}^{\prime \prime}\left(s\right)\ge 0$ , which implies that $g\left(s\right)$ is convex on $\left[0,1\right]$ . Note that $g\left(0\right)=0$ and $g\left(1\right)=-\frac{r}{l}{\displaystyle \underset{i=1}{\overset{l}{\sum}}\mathrm{ln}{p}_{i}}-1<0$ for $r<{r}_{m}=-l/{\displaystyle \underset{i=1}{\overset{l}{\sum}}\mathrm{ln}{p}_{i}}$ ; by convexity, $g\left(s\right)\le sg\left(1\right)<0$ for $s\in \left(0,1\right]$ , and in particular for $s\in \left[{s}_{2},1\right]$ . Let ${\mathrm{max}}_{s\in \left[{s}_{2},1\right]}g\left(s\right)=-M$ , where $M>0$ .
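The convexity argument can be checked numerically (the sample values of k, ${p}_{i}$ and r below are ours and satisfy $k\ge 1/p$ and $r<{r}_{m}$ ):

```python
import math

def g(s, r, k, tightness):
    """g(s) = (r/l) * sum_i ln(1 + (1 - p_i)/p_i * s^k) - s."""
    l = len(tightness)
    return (r / l) * sum(
        math.log(1 + (1 - pi) / pi * s ** k) for pi in tightness) - s

tightness = (0.4, 0.6)          # p = min p_i = 0.4
k = 5                           # satisfies k >= 1/p = 2.5
rm = -len(tightness) / sum(math.log(pi) for pi in tightness)
r = 0.9 * rm                    # below the threshold

# g(0) = 0 and g(1) < 0; convexity then forces g(s) <= s * g(1) < 0 on (0, 1].
g1 = g(1.0, r, k, tightness)
assert g1 < 0
for i in range(1, 101):
    s = i / 100
    assert g(s, r, k, tightness) <= s * g1 + 1e-12
```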

For $f\left(s\right)=-s\mathrm{ln}s-\left(1-s\right)\mathrm{ln}\left(1-s\right)$ , similarly we have

${f}^{\prime}\left(s\right)=-\mathrm{ln}s+\mathrm{ln}\left(1-s\right)$ (30)

${f}^{\prime \prime}\left(s\right)=-\frac{1}{s\left(1-s\right)}<0$ . (31)

So $f\left(s\right)$ is concave and has the maximum value $\mathrm{ln}2$ at $s=\frac{1}{2}$ . Thus we have

$\mathrm{ln}B\left(s\right)W\left(s\right)\le n\left[f\left(s\right)+\mathrm{ln}d\cdot g\left(s\right)\right]\le n\left(\mathrm{ln}2-M\mathrm{ln}d\right).$ (32)

So we get

${\displaystyle \underset{{s}_{2}\le s\le 1}{\sum}B\left(s\right)W\left(s\right)}\le n\underset{{s}_{2}\le s\le 1}{\mathrm{max}}B\left(s\right)W\left(s\right)\le n\mathrm{exp}\left[n\left(\mathrm{ln}2-M\mathrm{ln}d\right)\right]\le {2}^{n}{n}^{-\alpha Mn+1}$ , (33)

hence

$\underset{n\to \infty}{\mathrm{lim}}{\displaystyle \underset{{s}_{2}\le s\le 1}{\sum}B\left(s\right)W\left(s\right)}=0$ , (34)

i.e., for arbitrarily small $\epsilon >0$ , there exists an integer ${N}_{2}>0$ such that for all $n>{N}_{2}$ ,

${\displaystyle \underset{{s}_{2}\le s\le 1}{\sum}B\left(s\right)W\left(s\right)}<\frac{\epsilon}{2}$ . (35)

Summarizing the above, from (18), (25) and (35), for all sufficiently large $n>\mathrm{max}\left\{{N}_{1},{N}_{2}\right\}$ we obtain

${\displaystyle \underset{0\le s\le 1}{\sum}B\left(s\right)W\left(s\right)}<1+\epsilon $ . (36)

Thus we have

$\frac{E\left({N}^{2}\right)}{{E}^{2}\left(N\right)}\le 1+\epsilon $ , (37)

then by the Cauchy-Schwarz inequality $\mathrm{Pr}\left(sat\right)\ge \frac{{E}^{2}\left(N\right)}{E\left({N}^{2}\right)}$ , we have

$\frac{1}{1+\epsilon}\le \mathrm{Pr}\left(sat\right)\le 1$ , (38)

Since $\epsilon $ is arbitrary, we get

$\underset{n\to \infty}{\mathrm{lim}}\mathrm{Pr}\left(sat\right)=1$ . (39)

Thus the theorem is proved.

So far we have established the satisfiability phase transition theoretically. From the proof of the theorem, it can be seen that when the control parameter r is less than the transition point ${r}_{m}$ , the probability of a CSP instance being satisfiable tends to 1, while when r is greater than ${r}_{m}$ , the probability tends to 0. Thus there exists a sharp threshold for the CSP instances generated by the d-p-RB model.

5. Conclusion

In this paper, we propose a new CSP model, d-p-RB. Compared with the RB model, we diversify the constraint tightness p and broaden the domain size d. By the second moment method, we proved that a satisfiability phase transition does occur and that the transition point can be located exactly.

References

[1] Cheeseman, P., Kanefsky, B. and Taylor, W.M. (1991) Where the Really Hard Problems Are. Proceedings of the 12th International Joint Conference on Artificial Intelligence, Sydney, 24-30 August 1991, 331-337.

[2] Prosser, P. (1996) An Empirical Study of Phase Transitions in Binary Constraint Satisfaction Problems. Artificial Intelligence, 81, 81-109.

https://doi.org/10.1016/0004-3702(95)00048-8

[3] Friedgut, E. and Bourgain, J. (1999) Sharp Thresholds of Graph Properties, and the k-SAT Problem. Journal of the American Mathematical Society, 12, 1017-1054.

https://doi.org/10.1090/S0894-0347-99-00305-7

[4] Smith, B.M. (2001) Constructing an Asymptotic Phase Transition in Random Binary Constraint Satisfaction Problems. Theoretical Computer Science, 265, 265-283.

https://doi.org/10.1016/S0304-3975(01)00166-9

[5] Friedgut, E. (2005) Hunting for Sharp Thresholds. Random Structures and Algorithms, 26, 37-51. https://doi.org/10.1002/rsa.20042

[6] Frieze, A.M. and Molloy, M. (2006) The Satisfiability Threshold for Randomly Generated Binary Constraint Satisfaction Problems. Random Structures and Algorithms, 28, 323-339.

https://doi.org/10.1002/rsa.20118

[7] Smith, B.M. and Dyer, M.E. (1996) Locating the Phase Transition in Binary Constraint Satisfaction Problems. Artificial Intelligence, 81, 155-181.

https://doi.org/10.1016/0004-3702(95)00052-6

[8] Gent, I., Macintyre, E., Prosser, P. and Smith, B. (2001) Random Constraint Satisfaction: Flaws and Structure. Constraints, 6, 345-372.

https://doi.org/10.1023/A:1011454308633

[9] Achlioptas, D., Kirousis, L., Kranakis, E., Krizanc, D., Molloy, M. and Stamatiou, Y. (1997) Random Constraint Satisfaction: A More Accurate Picture. Proceedings of the Third International Conference on Principles and Practice of Constraint Programming, Austria, 29 October-1 November 1997, 107-120.

https://doi.org/10.1007/BFb0017433

[10] Molloy, M. (2003) Models for Random Constraint Satisfaction Problems. SIAM Journal of Computing, 32, 935-949.

https://doi.org/10.1137/S0097539700368667

[11] Yong, G. and Joseph, C. (2004) Consistency and Random Constraint Satisfaction Models with a High Constraint Tightness. Proceedings of the 10th International Conference on Principles and Practice of Constraint Programming, 17-31.

https://doi.org/10.1007/978-3-540-30201-8_5

[12] Yong, G. and Joseph, C. (2007) Consistency and Random Constraint Satisfaction Models. Journal of Artificial Intelligence Research, 28, 517-557.

[13] Achlioptas, D., Kirousis, L., Kranakis, E., Krizanc, D., Molloy, M. and Stamatiou, Y. (2001) Random Constraint Satisfaction: A More Accurate Picture. Constraints, 6, 329-344.

https://doi.org/10.1023/A:1011402324562

[14] Xu, K. and Li, W. (2000) Exact Phase Transitions in Random Constraint Satisfaction Problems. Journal of Artificial Intelligence Research, 12, 93-103.

[15] Xu, K. and Li, W. (2006) Many Hard Examples in Exact Phase Transitions. Theoretical Computer Science, 355, 291-302.

[16] Xu, K., Boussemart, F., Hemery, F. and Lecoutre, C. (2007) Random Constraint Satisfaction: Easy Generation of Hard Satisfiable Instances. Artificial Intelligence, 171, 514-534.

[17] Zhao, C. and Zheng, Z. (2011) Threshold Behaviors of a Random Constraint Satisfaction Problem with Exact Phase Transition. Information Processing Letters, 111, 985-988.

[18] Frieze, A. and Wormald, N.C. (2005) Random k-sat: A Tight Threshold for Moderately Growing k. Combinatorica, 25, 297-305.

https://doi.org/10.1007/s00493-005-0017-3

[19] Fan, Y. and Shen, J. (2011) On the Phase Transitions of Random k-Constraint Satisfaction Problems. Artificial Intelligence, 175, 914-927.

[20] Fan, Y., Shen, J. and Xu, K. (2012) A General Model and Thresholds for Random Constraint Satisfaction Problems. Artificial Intelligence, 193, 1-17.

[21] Zhao, C., Zhou, H., Zheng, Z. and Xu, K. (2011) A Message-Passing Approach to Random Constraint Satisfaction Problems with Growing Domains. Journal of Statistical Mechanics: Theory and Experiment, 2011, P02019.

https://doi.org/10.1088/1742-5468/2011/02/P02019

[22] Zhao, C. and Zheng, Z. (2012) A Belief-Propagation Algorithm Based on Variable Entropy for Constraint Satisfaction Problems. Chinese Science: Information Science, 42, 1170-1180.

[23] Zhao, C., Zhang, P., Zheng, Z. and Xu, K. (2012) Analytical and Belief-Propagation Studies of Random Constraint Satisfaction Problems with Growing Domains. Physical Review E: Statistical, Nonlinear, and Soft Matter Physics, 85, Article ID: 016106.

https://doi.org/10.1103/PhysRevE.85.016106