Predicting Compressive Strength of Recycled Concrete for Construction 3D Printing Based on Statistical Analysis of Various Neural Networks


1. Introduction

1.1. Construction 3D Printing

3D printing converts a virtual 3D model in a digital file into a physical object through an additive process in which the object is built up by laying down successive layers of material. Advances in 3D printing technology can significantly change and improve many fields such as energy use, waste reduction, product availability, medicine, art, and construction [1] . Construction 3D printing is an automated process that has grown rapidly in recent years and brings substantial benefits to the construction industry through greater customization, shorter construction time, and lower construction costs [2] . One recent 3D printing technology developed for the construction industry is contour crafting, an additive manufacturing technology that uses computer control to create accurate planar and freeform surfaces [3] . Another process, D-shape, works by depositing an adhesive onto successive layers of material. D-shape is a gantry-based powder-bed printer that can print building structures up to 6 × 6 × 6 meters [4] .

The printing materials play a very important role in 3D printing and must have basic features such as rapid hardening. Many studies have shown that the strength and stability of current printing materials are poor, which hinders the application of 3D printing to large-scale models or building construction. Sungwoo Lim described a concrete printing process with a practical example in designing and manufacturing a concrete component called Wonder Bench [5] . Khoshnevis found that the low strength of available printing materials may prevent their use in large-scale models or building construction [6] . T. T. Le presented experimental results concerning the mix design and properties of a high-performance fiber-reinforced fine-aggregate concrete for printing, designed to be extruded through a nozzle to build structural components layer by layer [7] . Sungwoo Lim investigated cement- and gypsum-based materials for construction 3D printing. The mix produced a high-strength material that is more than three times as strong in compression and flexure as conventional construction materials.

Yi Wei Tay pointed out that recycled materials as well as industrial wastes such as fly ash and slag should be used in construction 3D printing. Functional materials such as fiber-reinforced concrete (FRC) and highly ductile engineered cementitious composite (ECC) may also benefit 3D concrete printing, since they have higher tensile and flexural strength than conventional concrete and these properties are crucial for the stability of concrete structures [8] . Dirk Volkmer presented a 3D-printed composite of Portland cement paste and reinforcing short fibers (carbon, glass and basalt), yielding novel materials with high flexural strength (up to 30 MPa) and compressive strength (up to 80 MPa) [9] . In performance-based laboratory testing of cementitious mixtures for construction-scale 3D printing, an experimental study of four different mixtures revealed that the inclusion of silica fume and nano-clay significantly enhances shape stability [10] . Jay Sanjayan presented an innovative methodology for formulating a geopolymer-based material to meet the requirements and demands of commercially available powder-based 3D printers. Results indicated that the prepared geopolymer-based material achieved sufficient printability to be used in a powder-based 3D printer [11] .

1.2. Recycled Aggregate Concrete

In order to protect the environment, it is necessary to recycle solid waste. At the same time, rising costs and the depletion of natural resources are driving humans to make use of waste. Using recycled concrete aggregates from construction waste as a substitute for non-renewable natural aggregates has been considered a way to increase the sustainable use of resources in the construction industry and reduce environmental impact. However, because recycled aggregates vary with their source and processing, the mechanical properties of recycled aggregate concrete (RAC) differ from those of natural aggregate concrete (NAC).

Recycled aggregates differ substantially in composition and properties from natural aggregates, making it hard to predict the performance of recycled aggregate concrete and to design optimal mix proportions. Artificial neural networks (ANNs) have good potential as tools for predicting the compressive strength of recycled aggregate concrete [12] . An analysis of the practicability of using an ANN model as a prediction tool showed that a one-hidden-layer back propagation network could produce reasonable outputs that coincide with experimental results after proper training [13] . Gholamreza Abdollahzadeh proposed 20 models for predicting the compressive strength of RAC containing silica fume by using gene expression programming (GEP) [14] . İlker Bekir Topçu studied artificial neural network and fuzzy logic models for prediction; the results show that these models have strong potential for predicting the 3-, 7-, 14-, 28-, 56- and 90-day compressive and splitting tensile strengths of recycled aggregate concretes containing silica fume [15] .

Togay Ozbakkaloglu presented new empirical models for predicting the mechanical properties of recycled aggregate concrete (RAC) using the gene expression programming (GEP) technique. The assessment indicates that the predictions of the proposed models agree closely with the test results and that the new models provide improved estimates of the mechanical properties of RACs compared to existing models [16] . Tanja Kalman Šipoš proposed an optimized quantitative model for proportioning concrete mixtures based on cement content, water-cement ratio and percentage of recycled aggregate replacement according to the target compressive strength of recycled brick aggregate concrete (RBAC) [17] . Gwang-Hee Kim optimized the mixing proportion of RAC using neural networks (NNs) based on genetic algorithms (GAs) to increase the use of recycled aggregate (RA) [18] . Yoonseok Shin proposed a multiple regression model (MRM) for predicting the compressive strength of RAC, and the results show that it is useful for this purpose [19] .

1.3. The Use of Recycled Aggregate Concrete in Construction 3D Printing

Recycled aggregate concrete and construction 3D printing are promising fields of future research. Using recycled aggregate concrete as a 3D printing material is of great significance for green and sustainable construction: it not only disposes of construction waste but also saves costs, while construction 3D printing technology improves efficiency and shortens construction time. However, the performance of recycled concrete is not easy to determine and is affected by many factors, which prevents its application in construction 3D printing. Therefore, this paper proposes a method that uses artificial neural networks to predict the compressive strength of recycled concrete. By comparing the characteristics of different types of neural networks, the influence of various parameters on their performance and the distribution of evaluation indexes are analyzed, and finally the neural network with the best predicting performance is selected.

2. Methodology

Neural networks have unique advantages in solving multi-parameter, nonlinear problems. They are therefore a good tool for predicting the performance of recycled concrete and, in turn, guiding the design of construction 3D printing materials. However, because neural networks vary widely and differ greatly from one another, it is important to choose an appropriate network and determine suitable parameters so as to achieve the most accurate prediction. The numerical experiments in this paper show that different neural networks behave very differently, and some are extremely sensitive to parameter values. This article collects data on the compressive strength of recycled concrete from several papers and seeks the neural network that best predicts the compressive strength. It first briefly introduces four kinds of neural networks, then uses numerical experiments to determine the optimal performance of each network, and finally chooses the best-performing one. Many factors need to be considered, such as the magnitude of the relevant errors, their statistical distribution, and the accuracy of the prediction. After this analysis, the prediction performance of the neural network method for recycled 3D printing concrete can be determined.

In general, the main factors affecting the performance of a neural network are the numbers of hidden layers and of neurons per hidden layer. Therefore, with the same training function, the influence of the hidden-layer and neuron counts on the training effect is the main subject of investigation. Firstly, multiple neural networks with different parameters are set up: the number of hidden layers ranges from 1 to n and the number of hidden-layer neurons from 1 to 100. Secondly, these networks are run to obtain evaluation indicators of predicting performance, namely the root mean squared error (RMSE) and the coefficient of determination (R2), which serve in this paper as statistical evaluations of the errors during training and testing. Finally, the RMSE and R2 values of each group are statistically analyzed, and their histograms and probability distribution function curves are presented in the figures. To account for errors and accidental factors in the calculation process, each configuration is run three times and the results are averaged.
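The experimental procedure just described can be sketched as a simple grid search. In this sketch, `train_and_eval` is a hypothetical placeholder for training one network configuration and returning its test-set RMSE and R2 (here it merely returns dummy values); the loop bounds follow the ranges stated above, and three runs are averaged per configuration.

```python
import numpy as np

def train_and_eval(n_layers, n_neurons, seed):
    """Hypothetical stand-in for training one network configuration and
    returning (rmse, r2) on the test set; replace with a real trainer."""
    rng = np.random.default_rng(seed)
    return 10 + rng.normal(), 0.7 + 0.1 * rng.normal()

results = {}
for n_layers in range(1, 3):            # number of hidden layers: 1 to n (n = 2 here)
    for n_neurons in range(1, 101):     # hidden-layer neurons: 1 to 100
        # run three times and average to damp accidental factors
        runs = [train_and_eval(n_layers, n_neurons, seed) for seed in range(3)]
        rmse = np.mean([r[0] for r in runs])
        r2 = np.mean([r[1] for r in runs])
        results[(n_layers, n_neurons)] = (rmse, r2)
```

The `results` dictionary then holds one averaged (RMSE, R2) pair per configuration, from which histograms and distribution functions can be drawn.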

$RMSE=\sqrt{\frac{1}{n}\sum_{i=1}^{n}\left(t_{i}-o_{i}\right)^{2}}$ (1)

${R}^{2}=\frac{\left(n\sum {t}_{i}{o}_{i}-\sum {t}_{i}\sum {o}_{i}\right)^{2}}{\left(n\sum {t}_{i}^{2}-\left(\sum {t}_{i}\right)^{2}\right)\left(n\sum {o}_{i}^{2}-\left(\sum {o}_{i}\right)^{2}\right)}$ (2)

Here t is the target value, o is the output value and n is the number of collected data points. The root mean square error (RMSE) is the root mean square of the differences between predicted and actual values and is used to evaluate predicting performance; it is especially useful because deviations of either sign contribute equally. Larger RMSE values indicate worse model performance. The coefficient of determination (R2) shows how well the fitted function matches the data set; values above 0.7 and closer to 1 indicate that predicted results are closer to experimental results.
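As a minimal sketch, Equations (1) and (2) can be computed directly with NumPy (the function names are my own):

```python
import numpy as np

def rmse(t, o):
    """Root mean squared error between targets t and outputs o, Eq. (1)."""
    t, o = np.asarray(t, float), np.asarray(o, float)
    return np.sqrt(np.mean((t - o) ** 2))

def r_squared(t, o):
    """Coefficient of determination as the squared Pearson correlation, Eq. (2)."""
    t, o = np.asarray(t, float), np.asarray(o, float)
    n = len(t)
    num = (n * np.sum(t * o) - np.sum(t) * np.sum(o)) ** 2
    den = (n * np.sum(t ** 2) - np.sum(t) ** 2) * (n * np.sum(o ** 2) - np.sum(o) ** 2)
    return num / den
```

A perfect prediction (o identical to t) gives RMSE 0 and R2 equal to 1.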

3. BP Neural Network

3.1. Fundamental Principle of BP Neural Network

Error back propagation (BP) is one of the most commonly used supervised learning neural networks. Its principle is to use the steepest gradient descent method to approximate the target function. A BP network has three layers: input layer, hidden layer, and output layer. Nodes in adjacent layers are directly connected by links, and each link has a weight that indicates the strength of the relationship between the two nodes. Denoting the input of input-layer neuron i by I_{i}, the weight from input-layer neuron i to hidden-layer neuron j by W_{ji}, and the threshold of hidden-layer neuron j by K_{j}, the output H_{j} of hidden-layer neuron j is calculated by

${H}_{j}=f\left({\displaystyle \underset{i}{\sum}{w}_{ji}\cdot {I}_{i}+{K}_{j}}\right)$ (3)

$f\left(x\right)=\frac{1}{1+\mathrm{exp}\left(-x\right)}$ (4)

where f usually denotes the sigmoid function.
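A minimal sketch of the hidden-layer computation in Equations (3) and (4), assuming a 3-input, 2-neuron hidden layer with hand-picked example weights:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))  # Eq. (4)

def hidden_output(I, W, K):
    """Eq. (3): H_j = f(sum_i W_ji * I_i + K_j) for every hidden neuron j.
    I: inputs (n_in,), W: weights (n_hidden, n_in), K: thresholds (n_hidden,)."""
    return sigmoid(W @ I + K)

# toy example: 3 inputs, 2 hidden neurons (weights chosen for illustration)
I = np.array([0.5, -0.2, 0.1])
W = np.array([[0.4, 0.1, -0.3],
              [0.2, -0.5, 0.7]])
K = np.array([0.1, -0.1])
H = hidden_output(I, W, K)  # sigmoid outputs lie strictly in (0, 1)
```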

3.2. The Influence of Different Factors on BP Neural Network Prediction Performance

Impact of neuron numbers on predictive performance. Figure 1 and Figure 2 show the influence of the number of hidden-layer neurons on the prediction performance. The values of RMSE and R2 fluctuate greatly, with big differences between adjacent values. This shows that the performance of the BP neural network is sensitive to the number of neurons, and also that the BP algorithm is unstable and easily disturbed by external factors.

Impact of hidden layers on predictive performance. The operating time of a two-layer neural network is much longer than that of a single-layer network. As the number of hidden layers increases, the running time grows but the RMSE does not decrease; the value of R2 increases slightly, but the effect is not obvious. The characteristics of a multi-layer network are similar to those of a single-layer network.

Impact of cascading on predictive performance. Cascading refers to connections between non-adjacent layers: in addition to connections between adjacent layers, there are connections between the output layer and the hidden layers. Whether the BP neural network adopts the cascaded form makes little difference to RMSE and R2, while cascading increases the operating time, and neither extra layers nor extra nodes bear a direct relationship to prediction accuracy. In summary, this paper gives priority to the single-layer non-cascaded BP neural network.

Figure 1. RMSE of BP neural network.

Figure 2. R2 of BP neural network.

3.3. Distribution of BP Neural Network Predictive Performance Index

After calculating BP neural networks with different parameters, multiple sets of RMSE and R2 data can be obtained. Analyzing the relationships and distribution rules of these data not only helps to select the best-performing network, but also reveals patterns in the data that can improve the algorithm.

The distribution of RMSE. From Figure 3, the RMSE values are mainly concentrated in the interval [10, 20], so the distribution is relatively concentrated. The distribution function shows that the slope of the first half of the curve is relatively large, indicating that the data is mainly concentrated in that region, while the smaller slope of the latter half indicates that less data falls in that interval.

The distribution of R2. Figure 4 shows that the value of R2 is mainly concentrated in the interval [0.5, 0.7], so the distribution is relatively concentrated. The distribution function shows that the small slope of the first half indicates little data in that interval, while the large slope of the latter half shows that the data is mainly concentrated there.

Figure 3. RMSE distribution of BP neural network.

Figure 4. R2 distribution of BP neural network.

4. Elman Neural Network

4.1. Fundamental Principle of Elman Neural Network

Elman neural networks consist of four layers: input layer, hidden layer, connection (context) layer and output layer. The connection layer remembers the output of the hidden layer at the previous moment and can be regarded as a one-step time-delay operator. Building on the basic structure of the BP neural network, the output of the hidden layer is fed back to its own input through delay and storage, making the network sensitive to historical data. This feedback improves the network's ability to handle dynamic information, and the storage of internal state gives it a dynamic mapping capability, so that the system can adapt to time-varying conditions.

Suppose there are n inputs, m outputs and r neurons in the hidden and connection layers; the weight matrix from the input layer to the hidden layer is w_{1}, that between the connection layer and the hidden layer is w_{2}, and that from the hidden layer to the output layer is w_{3}. Let u(k − 1) denote the inputs of the network, x(k) the outputs of the hidden layer, x_{c}(k) the outputs of the connection layer, and y(k) the outputs of the network. Then

$x\left(k\right)=f\left({w}_{2}{x}_{c}\left(k\right)+{w}_{1}\left(u\left(k-1\right)\right)\right)$ (5)

${x}_{c}\left(k\right)=x\left(k-1\right)$ (6)

$y\left(k\right)=g\left({w}_{3}x\left(k\right)\right)$ (7)

where f represents the transfer function of the hidden layer; the sigmoid function is commonly used and is defined as

$f\left(x\right)=\frac{1}{1+\mathrm{exp}\left(-x\right)}$ (8)

g is the transfer function of the output layer and is usually a linear function. The Elman network uses the BP algorithm to calculate the weight values, and the error of the network is

$E={\displaystyle \underset{k=1}{\overset{m}{\sum}}{\left({t}_{k}-{y}_{k}\right)}^{2}}$ (9)
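Equations (5)-(7) amount to one recurrent step. A minimal NumPy sketch with assumed layer sizes and random, untrained weights:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def elman_step(u_prev, x_prev, w1, w2, w3):
    """One time step of Eqs. (5)-(7): the connection layer x_c(k) holds the
    hidden state from the previous step, x(k-1)."""
    x_c = x_prev                          # Eq. (6)
    x = sigmoid(w2 @ x_c + w1 @ u_prev)   # Eq. (5)
    y = w3 @ x                            # Eq. (7), linear output g
    return x, y

rng = np.random.default_rng(0)
n_in, r, m = 3, 4, 1                      # assumed sizes: inputs, hidden, outputs
w1 = rng.normal(size=(r, n_in))
w2 = rng.normal(size=(r, r))
w3 = rng.normal(size=(m, r))
x = np.zeros(r)                           # initial hidden state
for k in range(5):                        # feed a short random input sequence
    u = rng.normal(size=n_in)
    x, y = elman_step(u, x, w1, w2, w3)
```

In practice the weights would be fitted with the BP algorithm by minimizing the error E of Equation (9).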

4.2. Influence of Different Factors on Elman Neural Network Prediction Performance

Impact of neuron numbers on predictive performance. From Figure 5, with one hidden layer, in the range [0, 10] the value of RMSE decreases sharply as the number of neurons increases, indicating that the error keeps decreasing and the predicted values approach the actual values. The minimum is reached at 10 neurons, where the best prediction is achieved. In the interval [11, 100], the value of RMSE grows with the number of neurons, meaning that the error and the deviation between predicted and true values increase. From Figure 6, with one hidden layer, in the range [0, 10] the value of R2 rises sharply as the number of neurons increases, indicating that the coefficient of determination (R2) increases and the predicted and actual values approach each other. The peak is reached at 8 neurons, where the best prediction is achieved. In the interval [11, 100], R2 decreases as the number of neurons increases, indicating growing error and growing deviation between predicted and true values.

Impact of hidden layers on predictive performance. The operating time of a two-layer network is much longer than that of a single-layer network: the more layers, the longer the running time. As the number of hidden layers increases, the running time grows while the values of RMSE and R2 do not change significantly; the characteristics of a multi-layer network are similar to those of a single-layer one. Increasing the numbers of hidden-layer neurons and of layers increases the computation time without greatly improving the prediction accuracy, and in some cases may even degrade it. In summary, this paper gives priority to the single-layer Elman neural network.

Figure 5. RMSE of Elman neural network.

Figure 6. R2 of Elman neural network.

4.3. Distribution of Elman Neural Network Predictive Performance Index

After calculating Elman neural networks with different parameters, multiple sets of RMSE and R2 data can be obtained. Analyzing the relationships and distribution rules of these data not only helps to select the best-performing network, but also reveals patterns in the data that can improve the algorithm.

The distribution of RMSE. From Figure 7, it can be seen that the RMSE value is mainly concentrated in the range of [6, 9], especially in [8, 9]. The curve is very steep and is almost a straight line in the range of [9, 10], indicating that the RMSE is mainly distributed within [6, 9].

The distribution of R2. From Figure 8, we can see that the value of R2 is mainly concentrated in the range of [0.55, 0.85], especially in the range of [0.6, 0.8]. The curve is very steep, and is almost a straight line in the range of [0.85, 0.95], indicating that the value of R2 is mainly distributed within [0.55, 0.85].

Figure 7. RMSE distribution of Elman neural network.

Figure 8. R2 distribution of Elman neural network.

5. Generalized Regression Neural Network

5.1. Fundamental Principle of GRNN

The generalized regression neural network (GRNN) is a type of neural network widely used for continuous function mapping. The main function of GRNN is to estimate the linear or nonlinear regression of variables. In other words, given only the training vector x, the network calculates the most likely value of the output y. Specifically, the network estimates the joint probability density function (PDF) of x and y. The expected value of the output y for a given input vector x is then calculated by

$E\left[y|x\right]=\frac{{\displaystyle \underset{-\infty}{\overset{\infty}{\int}}yf\left(x,y\right)\text{d}y}}{{\displaystyle \underset{-\infty}{\overset{\infty}{\int}}f\left(x,y\right)\text{d}y}}$ (10)

One important advantage of GRNN is its very simple and quick training procedure. Another attractive feature is that, unlike the BP neural network, GRNN does not converge to a local minimum; in addition, its training process is more effective than that of the BP neural network. The input layer is fully connected to the pattern layer, which has one neuron for each training pattern. Each pattern neuron computes the pattern function

${h}_{i}=\mathrm{exp}\left(\frac{-{D}_{i}^{2}}{2{\sigma}^{2}}\right)$ (11)

${D}_{i}^{2}={\left(x-{u}_{i}\right)}^{\text{T}}\left(x-{u}_{i}\right)$ (12)

where σ denotes the smoothing parameter, x is the network input and u_{i} is the i-th training vector. The summation layer has two units, N and D. The first unit, N, computes the weighted sum of the pattern-layer outputs; the second unit, D, has all weights equal to 1 and simply sums the individual pattern outputs h_{i}. Finally, the output unit divides N by D to produce the prediction.
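The pattern, summation and output layers described above amount to a kernel-weighted average of the training targets, which can be sketched as follows; the toy training set and the `sigma` value are assumptions for illustration:

```python
import numpy as np

def grnn_predict(x, train_X, train_y, sigma=0.5):
    """GRNN output for one query x: pattern activations h_i (Eqs. 11-12),
    then the ratio N/D computed by the summation and output layers."""
    D2 = np.sum((train_X - x) ** 2, axis=1)   # squared distances, Eq. (12)
    h = np.exp(-D2 / (2 * sigma ** 2))        # pattern layer, Eq. (11)
    N = np.sum(h * train_y)                   # weighted summation unit
    D = np.sum(h)                             # plain summation unit
    return N / D                              # output unit

# toy data: the target is the sum of the two input features
train_X = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
train_y = train_X.sum(axis=1)
pred = grnn_predict(np.array([0.9, 0.9]), train_X, train_y, sigma=0.3)
```

For the query [0.9, 0.9] the nearest training pattern [1, 1] dominates, so the prediction lands close to its target value of 2; larger `sigma` values smooth the estimate toward the mean of all targets.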

5.2. Effect of Spread Rate on GRNN Neural Network Prediction Performance

For the GRNN, we mainly study the influence of the spread parameter on prediction performance. The spread ranges over (0, 1]: starting from an initial value of 0.01, it is increased in increments of 0.01 up to 1, giving 100 substeps whose effect on RMSE and R2 is plotted and examined. From Figure 9, the RMSE value increases as the spread increases, rapidly over the first half of the interval and then slowly over the latter half, indicating that the root mean square error grows continuously and the gap between predicted and actual values widens. From Figure 10, the value of R2 decreases as the spread increases, quickly in the first half and then more gently, indicating that the coefficient of determination keeps falling and the gap between predicted and actual values widens.

5.3. Distribution of GRNN Neural Network Predictive Performance Index

After calculating GRNN neural networks with different parameters, multiple sets of RMSE and R2 data can be obtained. Analyzing the relationships and distribution rules of these data not only helps to select the best-performing network, but also reveals patterns in the data that can improve the algorithm.

The distribution of RMSE. According to Figure 11, the RMSE values are mainly distributed in the range [8, 12]. In the distribution function, the first half of the curve is relatively flat, indicating that little data falls in that interval, while the latter part is steep, indicating that the data is mainly concentrated there.

The distribution of R2. According to Figure 12, the value of R2 is spread fairly evenly over the interval [0, 1], without a single pronounced concentration. The distribution function curve is relatively flat, with irregularities at some positions.

Figure 9. RMSE of GRNN neural network.

Figure 10. R2 of GRNN neural network.

Figure 11. RMSE distribution of GRNN neural network.

Figure 12. R2 distribution of GRNN neural network.

6. RBF Neural Network

6.1. Fundamental Principle of RBF Neural Network

The RBF neural network generally consists of three layers: input layer, hidden layer and output layer. The input layer feeds the input data to each node of the hidden layer. The hidden-layer nodes differ markedly from those of other neural networks because each node represents a data cluster centered on a specific point with a given radius. Each hidden node calculates the distance from the input vector to its own center; the distance is transformed by a basis function, and the result is the node's output. Each node output is multiplied by a constant weight and fed to the output layer, which contains a single node that sums the outputs of the previous layer and produces the final output value.

The calculation of the RBF neural network proceeds as follows. When the network receives a k-dimensional input vector X, it computes the scalar value

$Y=f\left(X\right)={w}_{0}+{\displaystyle \underset{i=1}{\overset{m}{\sum}}{w}_{i}r\left({D}_{i}\right)}$ (13)

where w_{0} is the bias, w_{i} is the weight parameter, and m is the number of nodes in the hidden layer of the RBF neural network. In this paper, the Gaussian function r(D_{i}) is used as the RBF, as shown below

$r\left({D}_{i}\right)=\mathrm{exp}\left(-{D}_{i}^{2}/{\sigma}^{2}\right)$ (14)

where σ is the radius of the cluster represented by the center node, and D_{i} represents the distance between the input vector X and the i-th data center. Clearly r(D_{i}) returns values between 0 and 1. Usually the Euclidean norm is used to calculate the distance, though other norms can also be used. The Euclidean norm is calculated by

${D}_{i}=\sqrt{{\displaystyle \underset{j=1}{\overset{k}{\sum}}{\left({x}_{j}-{c}_{ji}\right)}^{2}}}$ (15)

where c is a cluster center for any of the given nodes in the hidden layer.
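A minimal sketch of the forward evaluation in Equations (13)-(15), with assumed centers and hand-picked weights (no training step):

```python
import numpy as np

def rbf_predict(X_query, centers, weights, w0, sigma):
    """Eqs. (13)-(15): Y = w0 + sum_i w_i * exp(-D_i^2 / sigma^2), where D_i
    is the Euclidean distance from each input to center c_i."""
    # squared Euclidean distances, Eq. (15): shape (n_queries, n_centers)
    D2 = np.sum((centers[None, :, :] - X_query[:, None, :]) ** 2, axis=2)
    R = np.exp(-D2 / sigma ** 2)   # Gaussian RBF, Eq. (14), values in (0, 1]
    return w0 + R @ weights        # weighted sum plus bias, Eq. (13)

# assumed toy setup: 3 centers in 2-D with illustrative weights
centers = np.array([[0.0, 0.0], [1.0, 1.0], [2.0, 0.0]])
weights = np.array([1.0, -2.0, 0.5])
y = rbf_predict(np.array([[0.0, 0.0], [1.0, 1.0]]), centers, weights,
                w0=0.1, sigma=1.0)
```

At a query point coinciding with a center, that center's basis function equals 1 and its weight dominates the output, which is why the two queries above give outputs of opposite sign.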

Complex nonlinear systems, such as foreign exchange rate data, are often difficult to model with linear regression methods. Unlike regression, neural networks are nonlinear, and their parameters are determined by learning techniques and search algorithms (such as error back propagation and steepest gradient descent). The main drawbacks of the BP neural network are that the learning process is slow and time consuming and that it often gets stuck at a local minimum. The RBF neural network overcomes these problems and performs well because the parameters to be trained lie in the hidden layer of the network: determining their values reduces to a linear problem, solved by interpolation, so training is much faster than for the BP neural network. In addition, RBF neural networks can often fit the training data with near-perfect precision without being trapped in a local minimum.

6.2. Influence of Different Factors on RBF Neural Network Prediction Performance

Impact of neuron numbers on predictive performance. For the RBF neural network, Figure 13 shows that as the number of neurons increases, the value of RMSE decreases markedly, meaning the predicted and actual values continuously approach each other. The downward trend of RMSE is obvious, and when the number of neurons reaches the number of samples, the RMSE tends to 0 and remains steady as further neurons are added. From Figure 14, as the number of neurons increases, the value of R2 rises continuously while its slope decreases toward zero; when the number of neurons reaches the number of samples, the coefficient of determination tends to 1 and remains stable.

Impact of hidden layers on predictive performance. The operating time of a single-layer network is much shorter than that of a multi-layer network: the more layers, the longer the running time. As the number of neurons in the hidden layers increases, the running time becomes longer while the values of RMSE and R2 do not change significantly. The characteristics of a multi-layer network are similar to those of a single-layer one. Increasing the numbers of hidden-layer neurons and of layers increases the computation time without greatly improving the prediction accuracy, and in some cases may even degrade it. In summary, this paper gives priority to the single-layer RBF neural network.

Figure 13. RMSE of RBF neural network.

Figure 14. R2 of RBF neural network.

6.3. Distribution of RBF Neural Network Predictive Performance Index

After calculating RBF neural networks with different parameters, multiple sets of RMSE and R2 data can be obtained. Analyzing the relationships and distribution rules of these data not only helps to select the best-performing network, but also reveals patterns in the data that can improve the algorithm.

The distribution of RMSE. From Figure 15, the RMSE values are mostly concentrated in the interval [0, 8], with few elsewhere. The distribution function curve in [0, 1] is extremely steep, almost perpendicular to the axis, and then flattens, showing that the prediction error of the RBF neural network is stable.

The distribution of R2. From Figure 16, most R2 values are concentrated in the interval [0.7, 1], with few elsewhere. The distribution function curve in [0.95, 1] is extremely steep, almost perpendicular to the axis, showing that the R2 values of the RBF neural network are concentrated and the prediction performance is very good.

7. Comparison of Various Neural Networks

Comparing the RMSE values of the various neural networks in Table 1, the mean for the RBF neural network is the smallest and its median is almost zero, although its standard deviation is slightly larger than those of the other networks. Comparing the R2 values in Table 2, the mean for the RBF neural network is close to 1 and its median is 1, again with a slightly larger standard deviation. Therefore, the prediction performance of the RBF neural network is superior to that of the other networks. In conclusion, considering both R2 and RMSE, the RBF neural network is the best suited to predicting the compressive strength of recycled 3D printing concrete.

Table 1. RMSE of various neural networks.

Table 2. R2 of various neural networks.

Figure 15. RMSE distribution of RBF neural network.

Figure 16. R2 distribution of RBF neural network.

8. Conclusion

References

[1] Bogue, R. (2013) 3D Printing: The Dawn of a New Era in Manufacturing? Assembly Automation, 33, 307-311.

https://doi.org/10.1108/AA-06-2013-055

[2] Wu, P., Wang, J. and Wang, X. (2016) A Critical Review of the Use of 3D Printing in the Construction Industry. Automation in Construction, 68, 21-31.

https://doi.org/10.1016/j.autcon.2016.04.005

[3] Khoshnevis, B. (2004) Automated Construction by Contour Crafting—Related Robotics and Information Technologies. Automation in Construction, 13, 5-19.

https://doi.org/10.1016/j.autcon.2003.08.012

[4] Kreiger, M.A., MacAllister, B.A., Wilhoit, J.M. and Case, M.P. (2015) The Current State of 3D Printing for Use in Construction. The Proceedings of the 2015 Conference on Autonomous and Robotic Construction of Infrastructure, Ames, 2-3 June 2015, 149-158.

[5] Lim, S., Buswell, R., Le, T., Wackrow, R., Austin, S., Gibb, A., et al. (2011) Development of a Viable Concrete Printing Process. International Association for Automation and Robotics in Construction (IAARC).

[6] Khoshnevis, B., Bukkapatnam, S., Kwon, H. and Saito, J. (2001) Experimental Investigation of Contour Crafting Using Ceramics Materials. Rapid Prototyping Journal, 7, 32-42.

https://doi.org/10.1108/13552540110365144

[7] Le, T.T., Austin, S.A., Lim, S., Buswell, R.A., Gibb, A.G.F. and Thorpe, T. (2012) Mix Design and Fresh Properties for High-Performance Printing Concrete. Materials & Structures, 45, 1221-1232.

https://doi.org/10.1617/s11527-012-9828-z

[8] Tay, Y.W., Panda, B., Paul, S.C., Tan, M.J., Qian, S.Z., Leong, K.F., et al. (2016) Processing and Properties of Construction Materials for 3D Printing. Materials Science Forum, 861, 177-181.

https://doi.org/10.4028/www.scientific.net/MSF.861.177

[9] Hambach, M. and Volkmer, D. (2017) Properties of 3D-Printed Fiber-Reinforced Portland Cement Paste. Cement & Concrete Composites, 79, 62-70.

https://doi.org/10.1016/j.cemconcomp.2017.02.001

[10] Kazemian, A., Yuan, X., Cochran, E. and Khoshnevis, B. (2017) Cementitious Materials for Construction-Scale 3D Printing: Laboratory Testing of Fresh Printing Mixture. Construction & Building Materials, 145, 639-647.

https://doi.org/10.1016/j.conbuildmat.2017.04.015

[11] Xia, M. and Sanjayan, J. (2016) Method of Formulating Geopolymer for 3D Printing for Construction Applications. Materials & Design, 110, 382-390.

https://doi.org/10.1016/j.matdes.2016.07.136

[12] Duan, Z.H., Kou, S.C. and Poon, C.S. (2013) Prediction of Compressive Strength of Recycled Aggregate Concrete Using Artificial Neural Networks. Construction & Building Materials, 40, 1200-1206.

https://doi.org/10.1016/j.conbuildmat.2012.04.063

[13] Al-Mutairi, N., Terro, M. and Al-Khaleefi, A.L. (2004) Effect of Recycling Hospital Ash on the Compressive Properties of Concrete: Statistical Assessment and Predicting Model. Building & Environment, 39, 557-566.

https://doi.org/10.1016/j.buildenv.2003.12.010

[14] Abdollahzadeh, G., Jahani, E. and Kashir, Z. (2016) Predicting of Compressive Strength of Recycled Aggregate Concrete by Genetic Programming. Computers & Concrete, 18, 155-164.

https://doi.org/10.12989/cac.2016.18.2.155

[15] Topçu, İ.B. and Sarıdemir, M. (2008) Prediction of Mechanical Properties of Recycled Aggregate Concretes Containing Silica Fume Using Artificial Neural Networks and Fuzzy Logic. Computational Materials Science, 42, 74-82.

https://doi.org/10.1016/j.commatsci.2007.06.011

[16] Gholampour, A., Gandomi, A.H. and Ozbakkaloglu, T. (2017) New Formulations for Mechanical Properties of Recycled Aggregate Concrete Using Gene Expression Programming. Construction & Building Materials, 130, 122-145.

https://doi.org/10.1016/j.conbuildmat.2016.10.114

[17] Šipoš, T.K., Miličević, I. and Siddique, R. (2017) Model for Mix Design of Brick Aggregate Concrete Based on Neural Network Modelling. Construction & Building Materials, 148, 757-769.

https://doi.org/10.1016/j.conbuildmat.2017.05.111

[18] Corporation, H.P. (2013) Optimizing the Mixing Proportion with Neural Networks Based on Genetic Algorithms for Recycled Aggregate Concrete. Advances in Materials Science and Engineering, 36, 680-685.

[19] Shin, Y.S. and Kim, G.H. (2013) Predicting Compressive Strength of Recycled Aggregate Concrete by Multiple Regression Analysis. Applied Mechanics & Materials, 253, 546-549.