ay="inline" xmlns="http://www.w3.org/1998/Math/MathML"> C i is a pair ( ${S}_{i},{R}_{i}$ ), where ${S}_{i}\text{=}\left\{{s}_{{i}_{1}},{s}_{{i}_{2}},\cdots ,{s}_{{i}_{k}}\right\}$ (k is the length of the constraint) is a subset of U, and ${R}_{i}\subseteq {D}_{{i}_{1}}×{D}_{{i}_{2}}×\cdots ×{D}_{{i}_{k}}$ is a compatible assignments set.

A constraint ${C}_{i}$ is satisfied if the k-tuple of values assigned to variables in ${S}_{i}$ is contained in ${R}_{i}$ . A solution of a CSP instance is an assignment to all the variables that satisfies all constraints.

2.2. d-p-RB Model

A random CSP instance in d-p-RB model is generated in the following two steps:

Step 1. We select, with repetition, l groups of constraints. Each group contains $t/l$ constraints, and each constraint involves k variables selected at random from U, distinct from each other.

Step 2. For each group of constraints, we uniformly select at random without repetition ${p}_{i}{d}^{k}$ ( $0<{p}_{i}<1$ is the constraint tightness) compatible assignments to form the compatible assignments set ${R}_{i}$ ( $i=1,2,\cdots ,l$ ).
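The two steps above can be sketched in Python. This is an illustrative sketch only; the function name `generate_dprb` and the parameter layout are assumptions, not from the paper.

```python
import itertools
import random

def generate_dprb(n, d, k, l, t, tightness):
    """Illustrative d-p-RB instance generator (hypothetical helper).

    n: number of variables (each with domain {0, ..., d-1}), k: constraint arity,
    l: number of constraint groups, t: total number of constraints (l divides t),
    tightness: [p_1, ..., p_l] with 0 < p_i < 1.
    """
    assert t % l == 0
    all_tuples = list(itertools.product(range(d), repeat=k))
    constraints = []
    for i in range(l):
        # Step 2: for group i, draw p_i * d^k distinct compatible tuples as R_i
        r_i = set(random.sample(all_tuples, int(tightness[i] * d ** k)))
        for _ in range(t // l):
            # Step 1: each constraint restricts k distinct variables drawn from U
            scope = tuple(random.sample(range(n), k))
            constraints.append((scope, r_i))
    return constraints
```

Each constraint is represented as a (scope, relation) pair; an assignment satisfies it when the restriction of the assignment to the scope lies in the relation.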

3. Main Result

We let $t=rn\mathrm{ln}d$ , where r is a constant control parameter, which determines how many constraints are in a CSP instance. Let $p=\mathrm{min}\left\{{p}_{1},{p}_{2},\cdots ,{p}_{l}\right\}$ , where ${p}_{i}$ ( $i=1,2,\cdots ,l$ ) determines how restrictive the constraints are. Let $\mathrm{Pr}\left(sat\right)$ denote the probability of a random d-p-RB instance being satisfiable, then we have the following theorem.

Theorem. Let ${r}_{m}=-\frac{l}{\underset{i=1}{\overset{l}{\sum }}\mathrm{ln}{p}_{i}}$ ( $0<{p}_{i}<1$ ). If the constants k, p, α, γ satisfy $\alpha >\frac{\gamma +1}{k-1}$ and $k\ge \mathrm{max}\left\{\frac{1}{p},\gamma +2\right\}$ , then

$\underset{n\to \infty }{\mathrm{lim}}\mathrm{Pr}\left(sat\right)=\left\{\begin{array}{lll}0& \text{when}\text{ }r>{r}_{m}& \left(1\right)\\ 1& \text{when}\text{ }r<{r}_{m}& \left(2\right)\end{array}\right.$

The theorem shows that, when the number of variables n is sufficiently large, the satisfiability probability undergoes a sudden shift at $r={r}_{m}$ .

4. Proof of the Theorem

Let N denote the number of solutions of a random CSP instance I. The expectation and the second moment of N are denoted by $E\left(N\right)$ and $E\left({N}^{2}\right)$ . When $r>{r}_{m}$ , we apply the Markov inequality $\mathrm{Pr}\left(sat\right)\le E\left(N\right)$ . When $r<{r}_{m}$ , by the second moment method, we estimate an upper bound of $\frac{E\left({N}^{2}\right)}{{E}^{2}\left(N\right)}$ , and then the Cauchy inequality $\mathrm{Pr}\left(sat\right)\ge \frac{{E}^{2}\left(N\right)}{E\left({N}^{2}\right)}$ yields the result. We now treat the two cases in turn.

4.1. Proof of r > rm

Since the constraints are generated independently in d-p-RB model, the expected number of solutions $E\left(N\right)$ is given by

$\begin{array}{c}E\left(N\right)={d}^{n}{\left({p}_{1}{p}_{2}\cdots {p}_{l}\right)}^{\frac{t}{l}}\\ =\mathrm{exp}\left(n\mathrm{ln}d+\frac{r}{l}n\mathrm{ln}d\underset{i=1}{\overset{l}{\sum }}\mathrm{ln}{p}_{i}\right)\\ =\mathrm{exp}\left[n\mathrm{ln}d\left(1+\frac{r}{l}\underset{i=1}{\overset{l}{\sum }}\mathrm{ln}{p}_{i}\right)\right]\end{array}$ . (3)

Since $r>{r}_{m}$ , we have

$1+\frac{r}{l}\underset{i=1}{\overset{l}{\sum }}\mathrm{ln}{p}_{i}<0$ , (4)

thus

$\underset{n\to \infty }{\mathrm{lim}}E\left(N\right)=0$ . (5)

Then by the Markov inequality $\mathrm{Pr}\left(sat\right)\le E\left(N\right)$ and (5), we obtain

$\underset{n\to \infty }{\mathrm{lim}}\mathrm{Pr}\left(sat\right)=0$ . (6)
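As a numeric sanity check (not part of the proof), the sign of the coefficient $1+\frac{r}{l}\underset{i=1}{\overset{l}{\sum }}\mathrm{ln}{p}_{i}$ in (3) flips exactly at $r={r}_{m}$ ; the function name below is illustrative.

```python
import math

def en_exponent_coeff(r, tightness):
    """Coefficient of n*ln(d) in the exponent of E(N), from Equation (3)."""
    l = len(tightness)
    return 1 + (r / l) * sum(math.log(p) for p in tightness)

tightness = [0.5, 0.25]
# Threshold from the theorem: r_m = -l / sum(ln p_i)
r_m = -len(tightness) / sum(math.log(p) for p in tightness)
```

For $r>{r}_{m}$ the coefficient is negative and $E\left(N\right)\to 0$ ; for $r<{r}_{m}$ it is positive.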

4.2. Proof of r < rm

Definition 1 (The assignment pair) An assignment pair $〈{t}_{i},{t}_{j}〉$ is an ordered pair, where ${t}_{i}=\left({a}_{i1},{a}_{i2},\cdots ,{a}_{in}\right)$ , ${t}_{j}=\left({a}_{j1},{a}_{j2},\cdots ,{a}_{jn}\right)$ , and ${a}_{ih},{a}_{jh}\in {D}_{h}$ ( $h=1,2,\cdots ,n$ ). An assignment pair $〈{t}_{i},{t}_{j}〉$ satisfies a CSP instance if and only if both ${t}_{i}$ and ${t}_{j}$ satisfy the instance.

Definition 2 (The similarity number) Define a function as follows

$Sam\left({a}_{ih},{a}_{jh}\right)=\left\{\begin{array}{ll}1& {a}_{ih}={a}_{jh}\\ 0& {a}_{ih}\ne {a}_{jh}\end{array}\right.$

Assume $m=\underset{h=1}{\overset{n}{\sum }}Sam\left({a}_{ih},{a}_{jh}\right)$ ; thus the assignment pair $〈{t}_{i},{t}_{j}〉$ has m identical assignments, i.e., the similarity number of $〈{t}_{i},{t}_{j}〉$ is m. It is obvious that $0\le m\le n$ .

Next, we use the second moment method to complete the proof.

Assume that $P\left(〈{t}_{i},{t}_{j}〉\right)$ represents the probability that ${t}_{i}$ and ${t}_{j}$ simultaneously satisfy the instance I. We analyse this probability as follows:

Since there are m identical assignments in ${t}_{i}$ and ${t}_{j}$ , for each constraint, we have the following two cases:

1) The assignments of the k variables that the constraint restricts are all the same in ${t}_{i}$ and ${t}_{j}$ . In this case, the probability of $〈{t}_{i},{t}_{j}〉$ satisfying the constraint is

$\frac{\left(\begin{array}{c}{d}^{k}-1\\ {p}_{i}{d}^{k}-1\end{array}\right)}{\left(\begin{array}{c}{d}^{k}\\ {p}_{i}{d}^{k}\end{array}\right)}={p}_{i}$ , and for a random constraint, the probability of such a situation is $\frac{\left(\begin{array}{c}m\\ k\end{array}\right)}{\left(\begin{array}{c}n\\ k\end{array}\right)}$ .

2) Otherwise, the probability of $〈{t}_{i},{t}_{j}〉$ satisfying the constraint is

$\frac{\left(\begin{array}{c}{d}^{k}-2\\ {p}_{i}{d}^{k}-2\end{array}\right)}{\left(\begin{array}{c}{d}^{k}\\ {p}_{i}{d}^{k}\end{array}\right)}={p}_{i}\frac{{p}_{i}{d}^{k}-1}{{d}^{k}-1}$ , and the probability that $〈{t}_{i},{t}_{j}〉$ falls into such a situation is $1-\frac{\left(\begin{array}{c}m\\ k\end{array}\right)}{\left(\begin{array}{c}n\\ k\end{array}\right)}$ .

Let

${\sigma }_{m,n}=\frac{\left(\begin{array}{c}m\\ k\end{array}\right)}{\left(\begin{array}{c}n\\ k\end{array}\right)},\text{}s=\frac{m}{n}.$ (7)

Since

${\sigma }_{m,n}=\frac{\left(\begin{array}{c}m\\ k\end{array}\right)}{\left(\begin{array}{c}n\\ k\end{array}\right)}=\frac{m\left(m-1\right)\cdots \left(m-k+1\right)}{n\left(n-1\right)\cdots \left(n-k+1\right)}\le {\left(\frac{m}{n}\right)}^{k}$ (8)

we have

${\sigma }_{m,n}\le {s}^{k}.$ (9)
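Inequality (9) is easy to confirm numerically; the helper `sigma` below is an illustrative name, not notation from the paper.

```python
from math import comb

def sigma(m, n, k):
    """sigma_{m,n} = C(m,k)/C(n,k) from (7): the chance that a random
    k-element scope falls entirely inside the m agreeing positions."""
    return comb(m, k) / comb(n, k)
```

For $m<k$ the numerator vanishes, so the bound ${\sigma }_{m,n}\le {s}^{k}$ holds trivially there as well.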

Since the constraints are generated independently, the probability that the assignment pair $〈{t}_{i},{t}_{j}〉$ satisfies all the constraints in a random instance I is

$\begin{array}{c}P\left(〈{t}_{i},{t}_{j}〉\right)=\underset{i=1}{\overset{l}{\prod }}{\left[{p}_{i}{\sigma }_{m,n}+{p}_{i}\frac{{p}_{i}{d}^{k}-1}{{d}^{k}-1}\left(1-{\sigma }_{m,n}\right)\right]}^{\frac{t}{l}}\\ \le \underset{i=1}{\overset{l}{\prod }}{p}_{i}^{\frac{t}{l}}{\left[{\sigma }_{m,n}+{p}_{i}\left(1-{\sigma }_{m,n}\right)\right]}^{\frac{t}{l}}\\ \le \underset{i=1}{\overset{l}{\prod }}{p}_{i}^{\frac{t}{l}}{\left[{p}_{i}+\left(1-{p}_{i}\right){s}^{k}\right]}^{\frac{t}{l}}\\ \le \underset{i=1}{\overset{l}{\prod }}{p}_{i}^{\frac{2t}{l}}{\left(1+\frac{1-{p}_{i}}{{p}_{i}}{s}^{k}\right)}^{\frac{t}{l}}.\end{array}$ (10)

Let ${A}_{m}$ be the set of assignment pairs whose similarity number is m, $|{A}_{m}|$ be the cardinality of ${A}_{m}$ , then we have

$|{A}_{m}|={d}^{n}\left(\begin{array}{c}n\\ m\end{array}\right){\left(d-1\right)}^{n-m}={d}^{2n}\left(\begin{array}{c}n\\ m\end{array}\right){\left(1-\frac{1}{d}\right)}^{n-m}{\left(\frac{1}{d}\right)}^{m}$ . (11)
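The rewriting in (11) is a purely algebraic identity, which can be checked directly; the function names below are illustrative.

```python
from math import comb

def card_am_counting(n, m, d):
    # d^n choices for t_i, C(n,m) agreeing positions, d-1 differing values each
    return d ** n * comb(n, m) * (d - 1) ** (n - m)

def card_am_binomial_form(n, m, d):
    # d^{2n} * C(n,m) * (1 - 1/d)^{n-m} * (1/d)^m, the form used in (12)
    return d ** (2 * n) * comb(n, m) * (1 - 1 / d) ** (n - m) * (1 / d) ** m
```

The second form isolates the factor ${d}^{2n}$ , which matches ${E}^{2}\left(N\right)$ after the constraint probabilities are factored out.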

Thus by (10) and (11), the second order moment of the number of solutions of the random instance of d-p-RB model is

$\begin{array}{c}E\left({N}^{2}\right)=\underset{m=0}{\overset{n}{\sum }}|{A}_{m}|P\left(〈{t}_{i},{t}_{j}〉\right)\\ \le \underset{m=0}{\overset{n}{\sum }}{d}^{2n}\left(\begin{array}{c}n\\ m\end{array}\right){\left(1-\frac{1}{d}\right)}^{n-m}{\left(\frac{1}{d}\right)}^{m}\underset{i=1}{\overset{l}{\prod }}{p}_{i}^{\frac{2t}{l}}{\left(1+\frac{1-{p}_{i}}{{p}_{i}}{s}^{k}\right)}^{\frac{t}{l}}\\ ={E}^{2}\left(N\right)\underset{m=0}{\overset{n}{\sum }}\left(\begin{array}{c}n\\ m\end{array}\right){\left(1-\frac{1}{d}\right)}^{n-m}{\left(\frac{1}{d}\right)}^{m}\underset{i=1}{\overset{l}{\prod }}{\left(1+\frac{1-{p}_{i}}{{p}_{i}}{s}^{k}\right)}^{\frac{t}{l}}\\ ={E}^{2}\left(N\right)\underset{0\le s\le 1}{\sum }B\left(s\right)W\left(s\right),\end{array}$ (12)

where

$B\left(s\right)=\left(\begin{array}{c}n\\ ns\end{array}\right){\left(1-\frac{1}{d}\right)}^{n-ns}{\left(\frac{1}{d}\right)}^{ns}$ (13)

$W\left(s\right)=\underset{i=1}{\overset{l}{\prod }}{\left(1+\frac{1-{p}_{i}}{{p}_{i}}{s}^{k}\right)}^{\frac{t}{l}}$ , (14)

i.e.,

$\frac{E\left({N}^{2}\right)}{{E}^{2}\left(N\right)}\le \underset{0\le s\le 1}{\sum }B\left(s\right)W\left(s\right)$ . (15)

Considering that $0\le s\le 1$ , in order to evaluate the upper bound in Inequality (15), we divide the interval [0,1] into three parts: $\left[0,{s}_{1}\right]$ , $\left[{s}_{1},{s}_{2}\right]$ , $\left[{s}_{2},1\right]$ , where ${s}_{1}=\frac{1}{{n}^{\beta }}$ , ${s}_{2}=\frac{1}{{n}^{\frac{\gamma +1}{k-1}}}$ , and β, γ satisfy $\frac{\gamma +1}{k-1}<\beta <\mathrm{min}\left\{1,\alpha \right\}$ .

1) For $s\in \left[0,{s}_{1}\right]$ , recalling that $t=rn\mathrm{ln}d$ , $d\in \left[{n}^{\alpha },{n}^{{n}^{\gamma }}\right]$ , and $p=\mathrm{min}\left\{{p}_{1},{p}_{2},\cdots ,{p}_{l}\right\}$ , we have

$\begin{array}{c}W\left(s\right)=\underset{i=1}{\overset{l}{\prod }}{\left(1+\frac{1-{p}_{i}}{{p}_{i}}{s}^{k}\right)}^{\frac{rn\mathrm{ln}d}{l}}\\ \le {\left(1+\frac{1-p}{p}{s}^{k}\right)}^{rn\mathrm{ln}d}\\ \le \mathrm{exp}\left[rn\mathrm{ln}d\mathrm{ln}\left(1+\frac{1-p}{p}{s}^{k}\right)\right]\\ \le \mathrm{exp}\left(rn\mathrm{ln}d\cdot \frac{1-p}{p}\cdot {s}^{k}\right)\\ \le \mathrm{exp}\left[\frac{r\left(1-p\right)}{p}{n}^{\gamma +1-\beta k}\mathrm{ln}n\right].\end{array}$ (16)

Since $\beta >\frac{\gamma +1}{k-1}>\frac{\gamma +1}{k}$ , we have $\gamma +1-\beta k<0$ , so we get

$\underset{n\to \infty }{\mathrm{lim}}W\left(s\right)=1$ . (17)

Then it is not hard to obtain that

$\underset{0\le s\le {s}_{1}}{\sum }B\left(s\right)W\left(s\right)\le \underset{0\le s\le {s}_{1}}{\sum }B\left(s\right)\le \underset{0\le s\le 1}{\sum }B\left(s\right)=1$ . (18)
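The first bound in (16), which replaces each ${p}_{i}$ by $p=\mathrm{min}{p}_{i}$ , can also be checked numerically. The function names and the parameter values below are illustrative assumptions.

```python
import math

def w_of_s(s, r, n, d, k, tightness):
    """W(s) from (14), with t = r * n * ln d and l = len(tightness) groups."""
    l = len(tightness)
    t = r * n * math.log(d)
    return math.prod((1 + (1 - p) / p * s ** k) ** (t / l) for p in tightness)

def w_upper_bound(s, r, n, d, k, tightness):
    """Bound from (16): (1 + (1-p)/p * s^k)^(r n ln d) with p = min p_i."""
    p = min(tightness)
    return (1 + (1 - p) / p * s ** k) ** (r * n * math.log(d))
```

Each factor of the product is dominated by the factor built from the smallest tightness, so the full product with exponent $t/l$ over l groups is dominated by a single factor with exponent t.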

2) For $s\in \left[{s}_{1},{s}_{2}\right]$ , let $f\left(s\right)=-s\mathrm{ln}s-\left(1-s\right)\mathrm{ln}\left(1-s\right)$ . By Stirling's formula $n!={\left(\frac{n}{\text{e}}\right)}^{n}\sqrt{2\text{π}n}{\text{e}}^{\frac{\epsilon }{12n}}$ , where $|\epsilon |<1$ , it is not hard to see that

$\left(\begin{array}{c}n\\ ns\end{array}\right)<{\text{e}}^{nf\left(s\right)}$ . (19)
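The entropy bound (19) can be verified for moderate n. This is only a spot check; `entropy` is an illustrative name for the function f.

```python
import math
from math import comb

def entropy(s):
    """f(s) = -s ln s - (1-s) ln(1-s), with f(0) = f(1) = 0."""
    if s <= 0.0 or s >= 1.0:
        return 0.0
    return -s * math.log(s) - (1 - s) * math.log(1 - s)
```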

Since

$B\left(s\right)W\left(s\right)\le \left(\begin{array}{c}n\\ ns\end{array}\right){\left(1-\frac{1}{d}\right)}^{n-ns}{\left(\frac{1}{d}\right)}^{ns}{\left(1+\frac{1-p}{p}{s}^{k}\right)}^{rn\mathrm{ln}d}$ (20)

we get

$\begin{array}{c}\mathrm{ln}B\left(s\right)W\left(s\right)\le nf\left(s\right)+\left(n-ns\right)\mathrm{ln}\left(1-\frac{1}{d}\right)-ns\mathrm{ln}d+rn\mathrm{ln}d\mathrm{ln}\left(1+\frac{1-p}{p}{s}^{k}\right)\\ \le n\left[f\left(s\right)-s\mathrm{ln}d+\frac{r\left(1-p\right)}{p}\cdot \mathrm{ln}d\cdot {s}^{k}\right]\\ \le ns\left[-\mathrm{ln}s-\frac{1-s}{s}\mathrm{ln}\left(1-s\right)-\alpha \mathrm{ln}n+\frac{r\left(1-p\right)}{p}\cdot {n}^{\gamma }\cdot \mathrm{ln}n\cdot {s}^{k-1}\right].\end{array}$ (21)

For $s\in \left[{s}_{1},{s}_{2}\right]$ , we have

$\begin{array}{c}\mathrm{ln}B\left(s\right)W\left(s\right)\le ns\left[\beta \mathrm{ln}n+1-\alpha \mathrm{ln}n+\frac{r\left(1-p\right)}{p}\cdot {n}^{\gamma }\cdot \mathrm{ln}n\cdot {\left(\frac{1}{{n}^{\frac{\gamma +1}{k-1}}}\right)}^{k-1}\right]\\ =ns\mathrm{ln}n\left[\beta -\alpha +\frac{1}{\mathrm{ln}n}+\frac{r\left(1-p\right)}{p}\cdot \frac{1}{n}\right]\\ \le {n}^{1-\beta }\mathrm{ln}n\left[\beta -\alpha +o\left(\frac{1}{\mathrm{ln}n}\right)\right].\end{array}$ (22)

Since $\beta <\mathrm{min}\left\{1,\alpha \right\}$ , we have

$\underset{n\to \infty }{\mathrm{lim}}\mathrm{ln}B\left(s\right)W\left(s\right)=-\infty$ , (23)

hence

$\begin{array}{c}\underset{{s}_{1}\le s\le {s}_{2}}{\sum }B\left(s\right)W\left(s\right)\le n\underset{{s}_{1}\le s\le {s}_{2}}{\mathrm{max}}B\left(s\right)W\left(s\right)\\ \le n\mathrm{exp}\left[{n}^{1-\beta }\mathrm{ln}n\left(\beta -\alpha +o\left(\frac{1}{\mathrm{ln}n}\right)\right)\right]\\ ={n}^{{n}^{1-\beta }\left(\beta -\alpha +o\left(\frac{1}{\mathrm{ln}n}\right)\right)+1}\to 0\text{ }\left(n\to \infty \right).\end{array}$ (24)

Thus, for arbitrarily small $\epsilon >0$ , there exists an integer ${N}_{1}>0$ such that for $n>{N}_{1}$ ,

$\underset{{s}_{1}\le s\le {s}_{2}}{\sum }B\left(s\right)W\left(s\right)<\frac{\epsilon }{2}$ . (25)

3) For $s\in \left[{s}_{2},1\right]$ we have

$B\left(s\right)W\left(s\right)=\left(\begin{array}{c}n\\ ns\end{array}\right){\left(1-\frac{1}{d}\right)}^{n-ns}{\left(\frac{1}{d}\right)}^{ns}\underset{i=1}{\overset{l}{\prod }}{\left(1+\frac{1-{p}_{i}}{{p}_{i}}{s}^{k}\right)}^{\frac{rn\mathrm{ln}d}{l}}$ (26)

Then

$\begin{array}{c}\mathrm{ln}B\left(s\right)W\left(s\right)\le nf\left(s\right)-ns\mathrm{ln}d+\frac{r}{l}n\mathrm{ln}d\underset{i=1}{\overset{l}{\sum }}\mathrm{ln}\left(1+\frac{1-{p}_{i}}{{p}_{i}}{s}^{k}\right)\\ =n\left[f\left(s\right)+\mathrm{ln}d\left(-s+\frac{r}{l}\underset{i=1}{\overset{l}{\sum }}\mathrm{ln}\left(1+\frac{1-{p}_{i}}{{p}_{i}}{s}^{k}\right)\right)\right].\end{array}$ (27)

Let $g\left(s\right)=\frac{r}{l}\underset{i=1}{\overset{l}{\sum }}\mathrm{ln}\left(1+\frac{1-{p}_{i}}{{p}_{i}}{s}^{k}\right)-s$ , differentiating $g\left(s\right)$ with respect to s, we get

${g}^{\prime }\left(s\right)=\frac{r}{l}\underset{i=1}{\overset{l}{\sum }}\frac{k\left(1-{p}_{i}\right){s}^{k-1}}{{p}_{i}+\left(1-{p}_{i}\right){s}^{k}}-1$ , (28)

and then

${g}^{″}\left(s\right)=\frac{r}{l}\underset{i=1}{\overset{l}{\sum }}\frac{k\left(1-{p}_{i}\right){s}^{k-2}\left[\left(k-1\right){p}_{i}-\left(1-{p}_{i}\right){s}^{k}\right]}{{\left[{p}_{i}+\left(1-{p}_{i}\right){s}^{k}\right]}^{2}}$ . (29)

By the condition $k\ge \frac{1}{p}$ , we have ${g}^{″}\left(s\right)\ge 0$ , which implies that $g\left(s\right)$ is convex for $s\in \left[0,1\right]$ . Note that $g\left(0\right)=0$ and $g\left(1\right)=-\frac{r}{l}\underset{i=1}{\overset{l}{\sum }}\mathrm{ln}{p}_{i}-1<0$ for $r<{r}_{m}=-l/\underset{i=1}{\overset{l}{\sum }}\mathrm{ln}{p}_{i}$ . Since a convex function lies below the chord joining its endpoints, $g\left(s\right)\le sg\left(1\right)<0$ for $s\in \left(0,1\right]$ , and in particular $g\left(s\right)<0$ for $s\in \left[{s}_{2},1\right]$ . Let ${\mathrm{max}}_{s\in \left[{s}_{2},1\right]}g\left(s\right)=-M$ , where $M>0$ is a constant.
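The convexity and negativity of g used above can be spot-checked numerically. The parameter choices below are illustrative; they satisfy $k\ge 1/p$ and $r<{r}_{m}$ .

```python
import math

def g(s, r, k, tightness):
    """g(s) = (r/l) * sum_i ln(1 + (1-p_i)/p_i * s^k) - s."""
    l = len(tightness)
    return (r / l) * sum(math.log(1 + (1 - p) / p * s ** k) for p in tightness) - s

tightness = [0.5, 0.25]
k = 4  # satisfies k >= 1/p with p = min(tightness) = 0.25
r_m = -len(tightness) / sum(math.log(p) for p in tightness)
r = 0.9 * r_m  # any control parameter below the threshold
```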

For $f\left(s\right)=-s\mathrm{ln}s-\left(1-s\right)\mathrm{ln}\left(1-s\right)$ , similarly we have

${f}^{\prime }\left(s\right)=-\mathrm{ln}s+\mathrm{ln}\left(1-s\right)$ (30)

${f}^{″}\left(s\right)=-\frac{1}{s\left(1-s\right)}<0$ . (31)

So $f\left(s\right)$ is concave and has the maximum value $\mathrm{ln}2$ at $s=\frac{1}{2}$ . Thus we have

$\mathrm{ln}B\left(s\right)W\left(s\right)\le n\left[f\left(s\right)+\mathrm{ln}d\cdot g\left(s\right)\right]\le n\left(\mathrm{ln}2-M\mathrm{ln}d\right).$ (32)

So we get

$\underset{{s}_{2}\le s\le 1}{\sum }B\left(s\right)W\left(s\right)\le n\underset{{s}_{2}\le s\le 1}{\mathrm{max}}B\left(s\right)W\left(s\right)\le n\mathrm{exp}\left[n\left(\mathrm{ln}2-M\mathrm{ln}d\right)\right]\le {2}^{n}{n}^{-\alpha Mn+1}$ , (33)

hence

$\underset{n\to \infty }{\mathrm{lim}}\underset{{s}_{2}\le s\le 1}{\sum }B\left(s\right)W\left(s\right)=0$ , (34)

i.e., there exists an integer ${N}_{2}>0$ such that for $n>{N}_{2}$ ,

$\underset{{s}_{2}\le s\le 1}{\sum }B\left(s\right)W\left(s\right)<\frac{\epsilon }{2}$ . (35)

Summarizing the above, from (18), (25) and (35), letting ${N}_{0}=\mathrm{max}\left\{{N}_{1},{N}_{2}\right\}$ , for $n>{N}_{0}$ we obtain

$\underset{0\le s\le 1}{\sum }B\left(s\right)W\left(s\right)<1+\epsilon$ . (36)

Thus we have

$\frac{E\left({N}^{2}\right)}{{E}^{2}\left(N\right)}\le 1+\epsilon$ , (37)

then by the Cauchy inequality $\mathrm{Pr}\left(sat\right)\ge \frac{{E}^{2}\left(N\right)}{E\left({N}^{2}\right)}$ , we have

$\frac{1}{1+\epsilon }\le \mathrm{Pr}\left(sat\right)\le 1$ , (38)

Letting $n\to \infty$ and noting that $\epsilon$ is arbitrary, we get

$\underset{n\to \infty }{\mathrm{lim}}\mathrm{Pr}\left(sat\right)=1$ . (39)

Thus the theorem is proved.

So far we have demonstrated the satisfiability phase transition in theory. From the proof of the theorem, it can be seen that when the control parameter r is less than the transition point ${r}_{m}$ , the probability of a CSP instance being satisfiable tends to 1, while when r is greater than ${r}_{m}$ , the probability tends to 0. Thus there exists a sharp threshold for the CSP instances generated by the d-p-RB model.

5. Conclusion

In this paper, we propose a new CSP model, d-p-RB. Compared with the RB model, we diversify the constraint tightness p and broaden the domain size d. By the second moment method, we have proved that a satisfiability phase transition indeed exists and that the transition point can be located exactly.

Cite this paper
Liu, Y. (2017) Sharp Thresholds for a Random Constraint Satisfaction Problem. Open Journal of Applied Sciences, 7, 574-584. doi: 10.4236/ojapps.2017.710041.