Since its introduction by Euler in the eighteenth century, graph theory has found important applications in many different scientific fields. Graphs and linear algebra have long been used to model social interactions, and network models are now commonplace not only in the hard sciences but also in technological, social and biological settings. Networks are used to model a variety of highly interconnected systems, both in nature and in the man-made world of technology. These networks include protein-protein interaction networks, social networks, food webs, scientific collaboration networks, metabolic networks, lexical or semantic networks, neural networks, the World Wide Web and others. Network analysis is used in various situations: determining network structure and communities, describing the interactions between various elements of the network, and investigating the dynamical phenomena taking place in the network.
One of the foundational questions in network analysis is how to determine the “most important” nodes in a given network. Many centrality measures have been proposed, starting with the simplest of all, node degree centrality. This measure has been considered too “local”, as it does not take into account the connectivity of the immediate neighbours of the node under consideration. A number of centrality measures have therefore been introduced that take into account the global connectivity properties of the network. These include various types of eigenvector centrality for both directed and undirected networks, Katz centrality, subgraph centrality and PageRank centrality. Centrality scores provide rankings of the nodes in a network: the higher the ranking of a node, the more important the node is believed to be within the network. There are many different ranking methods in use, and many algorithms have been developed to compute these rankings.
The purpose of this paper is to discuss some of these centrality measures, to analyse the relationship between degree centrality, eigenvector centrality and Katz centrality, and to discuss measures of centrality based on matrix functions, including the logarithmic, sine, cosine, exponential and hyperbolic functions. The main aim is to determine which of the matrix functions is most highly correlated with the “standard” centrality measures. We will use the Kendall correlation coefficient in the experimental work to determine these correlations.
2. Literature Review
Bavelas introduced the application of centrality to human communication networks by measuring the communication within a small group in terms of the relationship between structural centrality and influence in a group process. Afterwards, applications of centrality were developed under the direction of Bavelas at the Group Networks Laboratory, M.I.T., in the late 1940s. Leavitt in 1949 and Smith in 1950 conducted studies on centrality measures, on which Bavelas in 1950 and Bavelas and Barrett in 1951 reported. These experiments all concluded that centrality was related to group efficiency in problem-solving and agreed with the subjective perception of leadership.
Various centrality measures were then explored in various contexts in the following decade. Cohn and Marriott in 1958 attempted to use centrality to understand political integration in Indian social life. Pitts examined the consequences of centrality in communication paths for urban development. Later, Czepiel used the concept of centrality to explain the pattern of diffusion of a technological innovation in the steel industry.
More recently, Bolland analysed the stability of degree (DC), closeness (CC), betweenness (BC) and eigenvector (EC) centrality under random and systematic variations of network structure, and found that betweenness centrality changes significantly with the variation of the network structure, while degree and closeness centrality are usually stable. He also found that eigenvector centrality is the most stable of all the indices analysed. Borgatti and Frantz extended the studies on the stability of centrality indices by considering the addition and deletion of nodes and links, as well as by differentiating several types of network topology such as uniformly random, small-world, core-periphery, scale-free and cellular. Landherr critically reviewed the role of centrality measures in social networks. Estrada analysed examples of how particular centrality measures are applied in social networks.
Benzi and Klymko analysed centrality measures such as degree, eigenvector, Katz and subgraph centrality for both undirected and directed networks. They quantified the local and global influence of a given node in the network through walks of different lengths through that node. They analysed the relationship between centrality measures based on the diagonal entries and row sums of the matrix exponential and the resolvent of the adjacency matrix on the one hand, and degree and eigenvector centrality on the other. They showed experimentally that the rankings produced by exponential subgraph centrality, total communicability and resolvent subgraph centrality converge to those produced by degree centrality.
Most of the centrality measures considered so far are combinatorial in nature and based on the discrete structure of the underlying networks. We can extend these studies by defining centrality measures using spectral techniques from linear algebra. Benzi and Klymko considered the diagonal entries of the matrix exponential $e^A$ and of the Katz resolvent $(I - \alpha A)^{-1}$, where $0 < \alpha < 1/\lambda_1$ ($\lambda_1$ being the principal eigenvalue) and $A$ is the adjacency matrix of the network. However, none of the previous studies considered other matrix functions, such as the logarithmic, cosine, sine and hyperbolic functions, or the generalized Katz centrality, as centrality measures. In this work we will develop notions of centrality based on matrix functions, and we will use the Kendall correlation coefficient to determine the agreement between the node rankings produced by these matrix functions and those produced by the standard centrality measures.
3. Elements of Graph Theory
Graphs are discrete structures which consist of vertices connected by edges. A graph can be written as $G = (V, E)$, where $V$ is a non-empty set of vertices (also called nodes) and $E$ is a set of edges. Each edge of the graph has two vertices associated with it; these two vertices are called its endpoints. An edge starting and ending at the same vertex is called a loop.
・ Graphs can be represented by using points as nodes (vertices) and joining them using line segments for edges. We write uv to denote an edge between nodes u and v.
・ We can assign numerical values to the edges of a graph in which case the graph is referred to as weighted. In an unweighted graph we assign to every edge the value 1.
・ If the edges of the graph are directed (Figure 1), then the graph is called a directed graph or digraph otherwise it is called an undirected graph. An undirected unweighted (Figure 2) graph without loops is simple if no two edges connect the same pair of vertices.
・ For undirected graphs, if there are multiple edges between a pair of nodes then the graph is called a multi-graph or pseudo-graph. In digraphs we can have two edges connecting the same pair of vertices, one in each direction.
3.1. Basic Graph-Theoretic Terminology
If $uv$ is an edge in an undirected graph G, then nodes u and v are incident to the edge uv and we say that u and v are adjacent (neighbours) in G. It follows that u and v are the endpoints of the edge uv. If G is a directed graph and $uv$ is an edge directed from u to v, then u is said to be adjacent to v and v is said to be adjacent from u. We call u the initial node of uv and v the terminal or end node of uv. For an undirected graph G, the degree of a node is the total number of edges which are incident to that node.
The degree is the number of “immediate neighbours” of a node in G. In a regular graph all nodes have the same degree. Nodes in directed graphs have an in-degree and an out-degree. The in-degree of a node $v$, denoted $\deg^-(v)$, is the total number of edges with $v$ as their terminal node. Similarly,
Figure 1. Digraph.
Figure 2. Undirected.
the out-degree of $v$, denoted $\deg^+(v)$, is the total number of edges with $v$ as their initial node. A loop contributes 1 to both the in-degree and the out-degree.
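As a quick illustration of these definitions, in- and out-degrees can be read directly off an adjacency matrix. The 4-node digraph below is a hypothetical example, not one of the figures in this paper:

```python
import numpy as np

# Hypothetical 4-node digraph: a_ij = 1 if there is an edge from node i to node j.
A = np.array([[0, 1, 1, 0],
              [0, 0, 1, 0],
              [0, 0, 0, 1],
              [1, 0, 0, 0]])

out_deg = A.sum(axis=1)  # row i sums the edges leaving node i
in_deg = A.sum(axis=0)   # column j sums the edges entering node j

print(out_deg.tolist())  # [2, 1, 1, 1]
print(in_deg.tolist())   # [1, 1, 2, 1]
```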
A graph $H = (W, F)$ is a subgraph of $G = (V, E)$ if $W \subseteq V$ and $F \subseteq E$. We write $H \subseteq G$, meaning that H is contained in G (or G contains H). If H contains all edges of G that join two vertices in W, then we say that H is a subgraph induced or spanned by W.
The subgraphs of $G$ can be obtained by deleting edges and vertices of G. We denote by $G - W$, with $W \subseteq V$, the subgraph of G obtained by deleting the vertices in W and all the edges incident with those vertices. In a similar manner, we denote by $G - F$, where $F \subseteq E$, the subgraph of G obtained by deleting all the edges in F.
3.2. Walk, Trail and Path
A walk of length k from node $v_0$ to node $v_k$ is a finite sequence of the form
$v_0, v_1, v_2, \ldots, v_k,$
where $v_{i-1}$ and $v_i$ are adjacent for $i = 1, \ldots, k$.
A trail in G is a walk in which no edges of G appear more than once (a walk with all different edges). A trail which begins and ends at the same node is known as a closed trail or circuit.
A path in G is a walk in which no nodes appear more than once, with the exception that $v_0$ can be equal to $v_k$. A cycle or closed path is a path which begins and ends at the same node.
Two nodes $u$ and $v$ in G are connected if there is a path between them. We say that graph G is connected if for every pair of nodes $u$ and $v$ there exists a path that starts at $u$ and ends at $v$. An edge $e$ in a connected graph is a bridge if G becomes disconnected when $e$ is deleted. A directed graph G is said to be strongly connected if for any two nodes $u$ and $v$ we can find a path from $u$ to $v$ and from $v$ to $u$. It is weakly connected if there is a path between every two nodes in the underlying undirected graph, i.e. the graph obtained by ignoring the directions of the edges. All strongly connected directed graphs are also weakly connected.
The digraph in Figure 3 is strongly connected because there is a path between any two ordered vertices in the directed graph. The digraph in Figure 4 is not strongly connected, since, for example, there is no directed path from S to T, nor from V to T, but it is weakly connected.
4. Matrices in Graphs
This section discusses ways of representing graphs using matrices. There are multiple ways to do this; a graph can be represented by its adjacency, incidence or Laplacian matrix.
Figure 3. Strongly connected.
Figure 4. Weakly connected.
4.1. Matrices for Undirected Graph
Given an undirected graph G with n vertices and m edges, the adjacency matrix of $G$ is the $n \times n$ matrix $A = (a_{ij})$, where $a_{ij} = 1$ if $ij \in E$, and $a_{ij} = 0$ otherwise.
The incidence matrix is the $n \times m$ matrix $B = (b_{ij})$, where $b_{ij} = 1$ if vertex $i$ is incident to edge $j$, and $b_{ij} = 0$ otherwise.
The Laplacian matrix can be found by using the relation
$L = D - A,$
where $D$ is a diagonal matrix whose ith diagonal entry is the degree of the ith node and $A$ is the adjacency matrix.
In other words, $L = (l_{ij})$, where $l_{ij} = \deg(v_i)$ if $i = j$; $l_{ij} = -1$ if $i \ne j$ and $v_i$ is adjacent to $v_j$; and $l_{ij} = 0$ otherwise.
Figure 5 is an undirected graph with five nodes and six edges in which there is a path from each node to every other node. From the figure we can write down the adjacency matrix $A$, the incidence matrix $B$ and the diagonal degree matrix $D$.
Figure 5. Undirected Graph.
The Laplacian matrix is then obtained as $L = D - A$.
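Since Figure 5 is not reproduced here, the construction $L = D - A$ can be sketched on a hypothetical 5-node, 6-edge undirected graph; the edge list below is an assumption for illustration only:

```python
import numpy as np

# Hypothetical 5-node, 6-edge undirected graph (a stand-in for Figure 5);
# edges given as unordered pairs of 0-based node indices.
edges = [(0, 1), (0, 2), (1, 2), (1, 3), (2, 4), (3, 4)]

n = 5
A = np.zeros((n, n), dtype=int)
for u, v in edges:
    A[u, v] = A[v, u] = 1

D = np.diag(A.sum(axis=1))  # diagonal matrix of node degrees
L = D - A                   # the Laplacian, L = D - A

# Every row of the Laplacian sums to zero, and L is symmetric.
print(L.sum(axis=1).tolist())  # [0, 0, 0, 0, 0]
```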
4.2. Matrices for Directed Graph
Given a directed graph G with n nodes and m edges, the adjacency matrix of $G$ is the $n \times n$ matrix $A = (a_{ij})$, where $a_{ij} = 1$ if there is an edge directed from node $i$ to node $j$, and $a_{ij} = 0$ otherwise.
The incidence matrix is the $n \times m$ matrix $B = (b_{ij})$, where $b_{ij} = 1$ if vertex $i$ is the initial node of edge $j$, $b_{ij} = -1$ if vertex $i$ is the terminal node of edge $j$, and $b_{ij} = 0$ otherwise.
Figure 6 is a digraph in which there is a path from node A to every other node of the graph, whereas there is no path from node C to any other node of the network. From the figure we can write down the adjacency matrix $A$ and the incidence matrix $B$.
Figure 6. Directed Graph.
4.3. Distance in Graphs
The geodesic distance, denoted $d(u, v)$, between two nodes $u$ and $v$ in the graph G is defined as the length of the shortest path between $u$ and $v$. The diameter of the graph is given as
$\mathrm{diam}(G) = \max_{u, v \in V} d(u, v).$
Geodesic distances in digraph Q in Figure 3:
The distance matrix of a graph, denoted $D(G)$, is the square matrix $D = (d_{ij})$, where $d_{ij}$ is equal to the length of the shortest path from node $i$ to node $j$.
Consider the undirected unweighted graph in Figure 7 as an example; the distance matrix for this graph is obtained by recording the shortest-path length between each pair of nodes.
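Distance matrices of this kind can be computed directly from the adjacency matrix; the sketch below uses SciPy's shortest-path routine on a hypothetical 4-node graph (a stand-in, since Figure 7 is not reproduced here):

```python
import numpy as np
from scipy.sparse.csgraph import shortest_path

# Hypothetical unweighted undirected graph on 4 nodes:
# edges 0-1, 0-2, 1-2 and 2-3.
A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]])

# Entry (i, j) of D is the geodesic distance d(i, j).
D = shortest_path(A, directed=False, unweighted=True)

print(D[0, 3])  # 2.0: the shortest path 0-2-3 has length 2
print(D.max())  # 2.0: the diameter of this graph
```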
4.4. Perron-Frobenius Theorem
We will state (without giving the proof) the Perron-Frobenius theorem which will be used later on in our work.
Theorem 1 (Perron-Frobenius theorem). Let $A \ge 0$ be an irreducible $n \times n$ matrix. Then:
・ $A$ has a principal eigenvalue $\lambda_1 > 0$ such that all other eigenvalues $\lambda_i$, for $i = 2, \ldots, n$, satisfy $|\lambda_i| \le \lambda_1$.
Figure 7. Unweighted-Undirected Graph.
・ The principal eigenvalue $\lambda_1$ has algebraic and geometric multiplicity 1, and has a right eigenvector $x$ with all positive elements, i.e. $x > 0$, and a left eigenvector $y$ with all positive elements, i.e. $y > 0$.
・ Any non-negative right eigenvector of $A$ is a multiple of $x$, and any non-negative left eigenvector is a multiple of $y$.
Furthermore, if $A$ is the adjacency matrix of a directed network with a strongly connected component, then
・ $A$ has a principal eigenvalue $\lambda_1 \ge 0$ such that all other eigenvalues $\lambda_i$ satisfy $|\lambda_i| \le \lambda_1$.
・ The principal eigenvalue $\lambda_1$ has algebraic and geometric multiplicity equal to 1, and has a left eigenvector $y$ with non-negative elements, i.e. $y_i > 0$ if node i belongs to the strongly connected component of the network or to its out-component, and $y_i = 0$ if node i belongs to the in-component of the strongly connected component of the network.
5. Centrality Measures
Centrality of a given node is a measure of the importance and influence of that node in the corresponding network. The identification of which nodes are more important or central than the others is a key issue in network analysis. We can ask the following questions:
・ Which are the most central nodes in a network?
・ Which are the most important nodes in a network?
・ Which are the most influential nodes in a network?
These types of questions can have different interpretations in different networks. For instance;
・ when dealing with a social network, the most central node can be the most popular person,
・ when dealing with a web portal network, the most central node can be a web page with the best quality of content in a specific field,
・ in terms of the internet network, the most central node might be a network gateway (router) with the highest bandwidth.
These ideas can be used to characterize types of centrality measures for finding the most important nodes in a network in a given context. That is, there are many different centrality measures. When measuring the centrality of a node, we should be sure that:
・ we know what each centrality measure means;
・ we know what it measures well; and
・ we know why a particular centrality measure is the most appropriate for the kind of network we are investigating.
The most common centrality measures include degree centrality, betweenness centrality, eigenvector centrality, Katz centrality, PageRank centrality, closeness centrality and subgraph centrality.
5.1. Degree Centrality
The degree centrality of a node $i$ in a given network is the total degree of $i$. Degree centrality measures the ability of a node to communicate directly with other nodes.
In an undirected network, the degree of node $i$ is given as
$d_i = \sum_{j=1}^{n} a_{ij} = e_i^T A \mathbf{1},$
where $A$ is the adjacency matrix, $e_i$ is the ith standard basis vector (ith column of the identity matrix) and $\mathbf{1}$ is the vector of all ones.
In a directed network, we can consider the in-degree of node $i$, given as
$d_i^{\mathrm{in}} = \sum_{j=1}^{n} a_{ji} = \mathbf{1}^T A e_i,$
or the out-degree of node $i$, given as
$d_i^{\mathrm{out}} = \sum_{j=1}^{n} a_{ij} = e_i^T A \mathbf{1}.$
In a directed graph, a source is a node with zero in-degree and a sink is a node with zero out-degree.
As an example, consider the undirected graph in Figure 8. We are interested in finding the central node using degree centrality.
The adjacency matrix for Network-1 in Figure 8 is given as
The degree centralities are contained in the vector $d = A \mathbf{1}$:
Figure 8. Network-1.
Using the degree centrality measure, node 1 is the most central node due to the fact that it has the highest degree.
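Degree centrality reduces to a single matrix-vector product $d = A\mathbf{1}$. The adjacency matrix below is a hypothetical stand-in for Network-1 (the figure is not reproduced here), with node 0 wired to every other node:

```python
import numpy as np

# Hypothetical 5-node stand-in for Network-1: node 0 is adjacent to all others.
A = np.array([[0, 1, 1, 1, 1],
              [1, 0, 1, 0, 0],
              [1, 1, 0, 0, 0],
              [1, 0, 0, 0, 1],
              [1, 0, 0, 1, 0]])

d = A @ np.ones(A.shape[0])  # degree centralities: d = A·1

print(d.tolist())            # [4.0, 2.0, 2.0, 2.0, 2.0]
print(int(np.argmax(d)))     # 0: the node with the highest degree
```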
5.2. Closeness Centrality
Closeness centrality measures the average shortest-path length from a node to all other nodes. It uses the neighbours and the neighbours of neighbours of a node to determine its centrality. Thus, nodes that are not directly connected to a given node are also taken into consideration, as opposed to the degree centrality case.
Letting $d_{ij}$ be the length of the shortest path from node i to node j, the mean distance from node i to the other nodes in a network is given by
$\ell_i = \frac{1}{N} \sum_{j=1}^{N} d_{ij},$
where N is the total number of nodes and $(d_{ij})$ denotes the distance matrix.
In general, we want to associate a high centrality score with important nodes, so we use the reciprocal of $\ell_i$ as the value of the centrality. Thus, the closeness centrality of node $i$ is given by
$C_i = \frac{1}{\ell_i} = \frac{N}{\sum_{j=1}^{N} d_{ij}}.$
For example, consider Network-1 in Figure 8.
The distance matrix is given by
Thus, computing $C_i = N / \sum_j d_{ij}$ for each node gives
Now, nodes 1 and 2 are identified as the most important nodes in the network.
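A minimal sketch of this computation, using a hypothetical 5-node graph (node 0 adjacent to every other node) rather than the actual Network-1:

```python
import numpy as np
from scipy.sparse.csgraph import shortest_path

# Hypothetical 5-node graph: node 0 is adjacent to every other node.
A = np.array([[0, 1, 1, 1, 1],
              [1, 0, 1, 0, 0],
              [1, 1, 0, 0, 0],
              [1, 0, 0, 0, 1],
              [1, 0, 0, 1, 0]])

D = shortest_path(A, directed=False, unweighted=True)
N = A.shape[0]
C = N / D.sum(axis=1)  # closeness: reciprocal of the mean distance

print(int(np.argmax(C)))  # 0: node 0 is nearest, on average, to the rest
```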
5.3. Eigenvector Centrality
In a connected undirected network we can measure centrality using the eigenvector centrality. Eigenvector centrality takes into consideration the importance of the neighbours of a node. In degree centrality a node is awarded one centrality point for each neighbour; eigenvector centrality instead gives each node a score which is proportional to the sum of the scores of its neighbours. In eigenvector centrality, a node is important if it is linked to other important nodes. The larger the entry associated with a node, the more important the node is considered to be.
By the Perron-Frobenius theorem, the eigenvector associated with the principal eigenvalue of the adjacency matrix is unique (up to scaling) if the network is connected. We define the centrality of a node iteratively by using the sum of its neighbours’ centralities. We initially assume that each node j has centrality $x_j^{(0)} = 1$. Then we calculate a new iterate as the sum of the centralities of node i’s neighbours:
$x_i^{(k)} = \sum_{j=1}^{n} a_{ij} x_j^{(k-1)},$
where $a_{ij}$ are the entries of the adjacency matrix.
In matrix form we write this as
$x^{(k)} = A x^{(k-1)}.$
After k steps, we have
$x^{(k)} = A^k x^{(0)}.$
Note that the eigenvector centrality is defined as the suitably normalised limit $x = \lim_{k \to \infty} \lambda_1^{-k} x^{(k)}$.
We can write $x^{(0)}$ as a linear combination of the eigenvectors $v_i$ of the adjacency matrix $A$, that is,
$x^{(0)} = \sum_{i=1}^{n} c_i v_i,$
where the $c_i$ are constants.
Then, from Equation (8), we have
$x^{(k)} = A^k x^{(0)} = \sum_{i=1}^{n} c_i \lambda_i^k v_i = \lambda_1^k \sum_{i=1}^{n} c_i \left(\frac{\lambda_i}{\lambda_1}\right)^k v_i,$
where $\lambda_i$ is the eigenvalue associated with the eigenvector $v_i$ and $\lambda_1$ is the principal eigenvalue.
Since $|\lambda_i / \lambda_1| < 1$ for $i > 1$, we have
$\lim_{k \to \infty} \lambda_1^{-k} x^{(k)} = c_1 v_1.$
This implies that the limiting centralities are proportional to the principal eigenvector of the adjacency matrix.
Therefore, in matrix form, the eigenvector centrality satisfies
$A x = \lambda_1 x.$
Note that in eigenvector centrality, the higher the centrality of the neighbours of the node, the more important the node is.
For example, consider the network in Figure 9.
The adjacency matrix for Network-2 in Figure 9 is given by
Figure 9. Network-2.
The eigenvalues of the adjacency matrix are
The principal eigenvector corresponding to the principal eigenvalue is
Since the eigenvector centralities correspond to the entries of the principal eigenvector, the eigenvector centralities for Network-2 will be
Using the eigenvector centrality, we conclude that node 7 is the most important node.
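In practice the principal eigenpair can be computed with a symmetric eigensolver. The adjacency matrix below is a hypothetical stand-in for Network-2 (node 4 plays the role of a hub adjacent to every other node):

```python
import numpy as np

# Hypothetical 5-node undirected network (a stand-in for Network-2, which
# is not reproduced here); node 4 is a hub adjacent to every other node.
A = np.array([[0, 1, 0, 0, 1],
              [1, 0, 1, 0, 1],
              [0, 1, 0, 1, 1],
              [0, 0, 1, 0, 1],
              [1, 1, 1, 1, 0]])

# eigh returns the eigenvalues of a symmetric matrix in ascending order,
# so the principal eigenpair is the last one.
vals, vecs = np.linalg.eigh(A)
x = np.abs(vecs[:, -1])  # principal eigenvector, fixed to be non-negative
x = x / x.sum()          # normalise the scores

print(int(np.argmax(x)))  # 4: the hub is the most central node
```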
5.4. Katz Centrality
Katz centrality takes into consideration both the number of direct neighbours of a node and its further connections in the network. That is, a node is important in Katz centrality if it has wide-ranging connections to other nodes in the network. Katz centrality takes into account walks of arbitrary length from a node i to the other nodes in the network.
The Katz centrality is given by
$x = \sum_{k=1}^{\infty} \alpha^k A^k \mathbf{1}, \qquad (14)$
where $\mathbf{1}$ is the column vector of ones, $\alpha$ is called the attenuation factor and $A$ is the adjacency matrix of the network. We can expand Equation (14) as
$x = \alpha A \mathbf{1} + \alpha^2 A^2 \mathbf{1} + \alpha^3 A^3 \mathbf{1} + \cdots$
and, if the sum converges, then
$x = \left((I - \alpha A)^{-1} - I\right)\mathbf{1},$
where I is the identity matrix and $\mathbf{1}$ is the column vector of ones.
To ensure the convergence of the series and a well-defined Katz centrality, the attenuation factor must lie in the range
$0 < \alpha < \frac{1}{\lambda_1}.$
For example, we compute the Katz centrality of Network-2 in Figure 9. The principal eigenvalue $\lambda_1$ of its adjacency matrix fixes the admissible range $0 < \alpha < 1/\lambda_1$; choosing a value of $\alpha$ in this range, the Katz centrality is then computed from Equation (14).
Therefore, node 7 is the most important node.
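A sketch of this computation, solving the linear system $(I - \alpha A)y = \mathbf{1}$ rather than forming the inverse explicitly; the 5-node network is a hypothetical stand-in, and $\alpha = 0.85/\lambda_1$ is an arbitrary choice inside the admissible range:

```python
import numpy as np

# Hypothetical 5-node undirected network with a hub at node 4.
A = np.array([[0, 1, 0, 0, 1],
              [1, 0, 1, 0, 1],
              [0, 1, 0, 1, 1],
              [0, 0, 1, 0, 1],
              [1, 1, 1, 1, 0]])

lam1 = np.linalg.eigvalsh(A)[-1]  # principal eigenvalue
alpha = 0.85 / lam1               # attenuation factor inside (0, 1/lam1)

n = A.shape[0]
ones = np.ones(n)
# Katz centrality x = ((I - alpha*A)^{-1} - I) 1, via a linear solve.
x = np.linalg.solve(np.eye(n) - alpha * A, ones) - ones

print(int(np.argmax(x)))  # 4: the hub ranks first
```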
5.5. Subgraph Centrality
Subgraph centrality attempts to measure the centrality of a node by taking into consideration the participation of each node in all subgraphs of the network. It does this indirectly by counting the number of closed walks in the network which start and end at a given node: a relationship can be shown between subgraphs and these walks.
If $A$ is the adjacency matrix of an unweighted network, we know that
・ $[A^k]_{ii}$ corresponds to the number of closed walks of length k starting and ending at node $i$;
・ $[A^k]_{ij}$ corresponds to the number of walks of length k that start at node $i$ and end at node $j$.
In a similar way to Katz centrality, the subgraph centrality of a node i is a weighted sum of the closed walks of different lengths which start and end at node i. The shorter the closed walk, the more it influences the centrality of the node.
The subgraph centrality of node i in the network is given by
$SC(i) = \sum_{k=0}^{\infty} \frac{[A^k]_{ii}}{k!} = [e^A]_{ii}.$
Considering the exponential of the adjacency matrix,
$e^A = I + A + \frac{A^2}{2!} + \frac{A^3}{3!} + \cdots,$
we observe that the closed walks of length 2 counted by $A^2$ are counted twice for every link in the network, while the closed walks of length 3 counted by $A^3$ are counted once for every orientation and starting point of a triangle. To compensate for this over-counting, the closed walks of length 2 are penalized by $2!$ and the closed walks of length 3 by $3!$. In general, any circuit of length k can be traversed in 2 directions and started at any of its k points, so that it is counted $2k$ times. Hence, for $k \ge 4$, the $k!$ penalization in the exponential is no longer the same as the factor $2k$ arising from counting the repeated closed walks.
For instance, let us consider Network-2 in Figure 9. We want to find the subgraph centrality. We have;
Then the subgraph centrality will consist of the diagonal entries of $e^A$, which are
We observe that node 7 has the highest subgraph centrality, thus, node 7 is the most central node.
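The whole computation is the diagonal of a matrix exponential, for which SciPy provides `expm`; the 5-node network below is a hypothetical stand-in for Network-2:

```python
import numpy as np
from scipy.linalg import expm

# Hypothetical 5-node undirected network with a hub at node 4.
A = np.array([[0, 1, 0, 0, 1],
              [1, 0, 1, 0, 1],
              [0, 1, 0, 1, 1],
              [0, 0, 1, 0, 1],
              [1, 1, 1, 1, 0]], dtype=float)

# Subgraph centrality: SC(i) = [e^A]_{ii}.
sc = np.diag(expm(A))

print(int(np.argmax(sc)))  # 4: the hub takes part in the most closed walks
```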
6. Relationship between Centrality Measures
Among the challenges that arise in determining the importance of a node in a network using centrality is that it is not always clear which of the centrality measures should be used. It is not obvious whether two centrality measures will give the same ranking of the nodes in a given network. Also, the necessity of choosing the attenuation factor $\alpha$ in Katz centrality adds another challenge, since different choices of $\alpha$ may lead to different rankings. Experimentally, it has been observed that different centrality measures provide highly correlated rankings. The ranking becomes more stable as $\alpha$ approaches its limits, i.e. as $\alpha \to 0^+$ or $\alpha \to (1/\lambda_1)^-$.
We will now prove these correlation and stability results, relating Katz centrality to degree and eigenvector centrality.
Theorem 2. Let G be an undirected connected network with adjacency matrix $A$, and let the Katz centrality be given as
$x(\alpha) = \left((I - \alpha A)^{-1} - I\right)\mathbf{1}, \qquad 0 < \alpha < 1/\lambda_1.$
Then:
・ as $\alpha \to 0^+$, the ranking produced by $x(\alpha)$ converges to that produced by degree centrality, and
・ as $\alpha \to (1/\lambda_1)^-$, the ranking produced by $x(\alpha)$ converges to that produced by eigenvector centrality.
Proof. The Katz centrality is given as
$x(\alpha) = \sum_{k=1}^{\infty} \alpha^k A^k \mathbf{1},$
which can be written as
$x(\alpha) = \alpha d + \alpha^2 A^2 \mathbf{1} + \alpha^3 A^3 \mathbf{1} + \cdots,$
where $d = A \mathbf{1}$ is the vector of the degree centralities of the nodes.
Consider the relation
$\frac{x(\alpha)}{\alpha} = d + \alpha A^2 \mathbf{1} + \alpha^2 A^3 \mathbf{1} + \cdots.$
It is clear that the ranking produced by $x(\alpha)/\alpha$ will be exactly the same as that produced by $x(\alpha)$, due to the fact that the score of each node has been scaled in the same way. Thus,
$\lim_{\alpha \to 0^+} \frac{x(\alpha)}{\alpha} = d,$
where $d$ is the vector of the degree centralities of the nodes.
Therefore, as $\alpha \to 0^+$, the ranking produced by the Katz centrality reduces to that produced by degree centrality.
To show the second relation, we write the column vector $\mathbf{1}$ as
$\mathbf{1} = \sum_{i=1}^{n} c_i v_i,$
where the $c_i$ are constants and the $v_i$ are eigenvectors of the matrix $A$.
Then, we can write the Katz centrality as
$x(\alpha) = \sum_{i=1}^{n} c_i \left(\frac{1}{1 - \alpha \lambda_i} - 1\right) v_i = \sum_{i=1}^{n} c_i \frac{\alpha \lambda_i}{1 - \alpha \lambda_i} v_i.$
Consider the relation
$(1 - \alpha \lambda_1)\, x(\alpha) = c_1 \alpha \lambda_1 v_1 + (1 - \alpha \lambda_1) \sum_{i=2}^{n} c_i \frac{\alpha \lambda_i}{1 - \alpha \lambda_i} v_i.$
The ranking produced by $(1 - \alpha \lambda_1) x(\alpha)$ is exactly the same as that produced by $x(\alpha)$, due to the fact that the score of each node has been scaled in the same way. This implies that
$\lim_{\alpha \to (1/\lambda_1)^-} (1 - \alpha \lambda_1)\, x(\alpha) = c_1 v_1.$
That is, the limiting centralities are proportional to the principal eigenvector of the adjacency matrix. Thus, as $\alpha \to (1/\lambda_1)^-$, the ranking produced by the Katz centrality reduces to that produced by eigenvector centrality.
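The two limits in Theorem 2 can also be checked numerically. The sketch below builds a random undirected network (an arbitrary test case, not one of the networks in this paper) and compares the Katz rankings near each end of the admissible range of $\alpha$ with the degree and eigenvector rankings via Kendall's $\tau$:

```python
import numpy as np
from scipy.stats import kendalltau

# Random undirected test network on 30 nodes (arbitrary example).
rng = np.random.default_rng(0)
n = 30
A = (rng.random((n, n)) < 0.2).astype(float)
A = np.triu(A, 1)
A = A + A.T  # symmetric with zero diagonal

vals, vecs = np.linalg.eigh(A)
lam1 = vals[-1]
deg = A.sum(axis=1)       # degree centralities
ev = np.abs(vecs[:, -1])  # eigenvector centralities

def katz(alpha):
    ones = np.ones(n)
    return np.linalg.solve(np.eye(n) - alpha * A, ones) - ones

# Near alpha -> 0 the Katz ranking tracks degree centrality;
# near alpha -> 1/lam1 it tracks eigenvector centrality.
tau_deg, _ = kendalltau(katz(1e-4), deg)
tau_ev, _ = kendalltau(katz(0.999 / lam1), ev)

print(tau_deg, tau_ev)  # both values are close to 1
```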
7. Matrix Functions
This section discusses some matrix functions developed using Taylor series. Matrix functions have applications throughout applied mathematics and scientific computing; they are used, for example, in control theory and electromagnetism, and can also be used to study complex networks such as social networks.
Let $f$ be a complex-valued function which is analytic in the disc $|z| < R$, where $R > 0$. Using Taylor’s theorem, we can represent $f$ as a convergent power series
$f(z) = \sum_{k=0}^{\infty} c_k z^k \qquad (31)$
for $|z| < R$, where the $c_k$ are complex-valued constants.
Let $A \in \mathbb{C}^{n \times n}$ be a complex-valued matrix. Then we define the matrix function of $A$ as
$f(A) = \sum_{k=0}^{\infty} c_k A^k. \qquad (32)$
The matrix series in Equation (32) converges to the matrix $f(A)$ if all $n^2$ scalar series that make up $f(A)$ are convergent. It turns out that the series for $f(A)$ converges if all eigenvalues of $A$ lie in the region of convergence of the series in Equation (31). This is made precise by the following theorem.
Theorem 3. Suppose that $f$ has a power series representation, written as
$f(z) = \sum_{k=0}^{\infty} a_k z^k, \qquad (33)$
in an open disc $|z| < R$. Then the series
$f(A) = \sum_{k=0}^{\infty} a_k A^k \qquad (34)$
is convergent if and only if the eigenvalues of $A$ lie in $|z| < R$.
Proof. We prove this theorem only for diagonalisable matrices $A$.
Let $P$ be a transformation matrix which diagonalizes $A$, so that $A = P \Lambda P^{-1}$ with $\Lambda = \mathrm{diag}(\lambda_1, \ldots, \lambda_n)$. Then we can write
$f(A) = \sum_{k=0}^{\infty} a_k A^k = P \left( \sum_{k=0}^{\infty} a_k \Lambda^k \right) P^{-1} = P\, \mathrm{diag}\big(f(\lambda_1), \ldots, f(\lambda_n)\big)\, P^{-1},$
where each scalar series $f(\lambda_i)$ converges whenever $|\lambda_i| < R$.
Conversely, if $A$ has an eigenvalue $\lambda$ with $|\lambda| \ge R$, then the series in Equation (33) diverges when evaluated at $\lambda$, and it follows that the series in Equation (34) also diverges. That is, if there exist eigenvalues of the matrix $A$ which fall outside $|z| < R$, then the series in Equation (34) diverges.
Therefore $f(A)$ converges if and only if $|\lambda_i| < R$ for all eigenvalues $\lambda_i$ of the matrix $A$.
In general, if the function $f$ can be expressed as a Taylor series which converges in a disc containing the eigenvalues of $A$, then $f(A)$ can be computed by substituting the matrix $A$ for the variable z in the series for $f(z)$. For instance, $e^A = \sum_{k=0}^{\infty} A^k / k!$.
The most important matrix functions which can be expressed by using the Taylor series are the following:
・ Exponential function: $e^A = \sum_{k=0}^{\infty} \frac{A^k}{k!} = I + A + \frac{A^2}{2!} + \frac{A^3}{3!} + \cdots$
・ Cosine function: $\cos A = \sum_{k=0}^{\infty} \frac{(-1)^k A^{2k}}{(2k)!}$
・ Sine function: $\sin A = \sum_{k=0}^{\infty} \frac{(-1)^k A^{2k+1}}{(2k+1)!}$
・ Logarithmic function: $\log(I + A) = \sum_{k=1}^{\infty} \frac{(-1)^{k+1} A^k}{k}$, convergent when the eigenvalues of $A$ lie in $|z| < 1$
・ Hyperbolic functions:
1) sinh function: $\sinh A = \sum_{k=0}^{\infty} \frac{A^{2k+1}}{(2k+1)!}$
2) cosh function: $\cosh A = \sum_{k=0}^{\infty} \frac{A^{2k}}{(2k)!}$
Each of these functions can (in theory) be used to define a centrality measure on a network with adjacency matrix $A$. For example, to obtain the centralities of all the nodes we can compute $f(A)\mathbf{1}$, where $\mathbf{1}$ is the vector of ones, or take the diagonal entries $[f(A)]_{ii}$.
We may need to be careful with these raw centrality measures, as $f(A)$ may contain negative (or even complex) entries. For instance, to compute the logarithmic function $\log A$ we need to take care of complex entries, since it is not possible to produce a ranking from complex values. To avoid complex entries, we compute $\log(cI + A)$, where $c$ is a real constant chosen so that $cI + A$ has positive eigenvalues. The constant $c$ differs for different networks.
We can also define centrality measures by applying analytic continuations of $f$ outside its radius of convergence.
Recall that, if the attenuation factor satisfies $0 < \alpha < 1/\lambda_1$, where $\lambda_1$ is the principal eigenvalue of $A$, then
$x(\alpha) = \left((I - \alpha A)^{-1} - I\right)\mathbf{1}.$
By analytic continuation, $(I - \alpha A)^{-1}$ is also defined for $\alpha > 1/\lambda_1$, as long as $\alpha \ne 1/\lambda$ for every eigenvalue $\lambda$ of $A$. Then we can generalize the Katz centrality by the following definition:
$x(\alpha) = \left((I - \alpha A)^{-1} - I\right)\mathbf{1}, \qquad \alpha \notin \left\{1/\lambda : \lambda \in \sigma(A),\ \lambda \ne 0\right\},$
where $\mathbf{1}$ is the column vector of ones and $\sigma(A)$ is the spectrum of $A$.
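A sketch of how such matrix-function centralities might be computed with SciPy, on a hypothetical 5-node network (not one of the paper's figures); the shift constant for the logarithm is one possible choice, taken just above $-\lambda_{\min}$:

```python
import numpy as np
from scipy.linalg import expm, sinhm, coshm, logm

# Hypothetical 5-node undirected network with a hub at node 4.
A = np.array([[0, 1, 0, 0, 1],
              [1, 0, 1, 0, 1],
              [0, 1, 0, 1, 1],
              [0, 0, 1, 0, 1],
              [1, 1, 1, 1, 0]], dtype=float)

# Diagonal entries of f(A) as raw centrality scores.
exp_c = np.diag(expm(A))
sinh_c = np.diag(sinhm(A))
cosh_c = np.diag(coshm(A))

# For the logarithm, shift by cI so that cI + A has positive eigenvalues.
c = 1 - np.linalg.eigvalsh(A)[0]
log_c = np.diag(logm(c * np.eye(5) + A)).real

print([int(np.argmax(v)) for v in (exp_c, sinh_c, cosh_c, log_c)])
```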
To determine which of the matrix functions can be used to assess centrality in a network, we will carry out experimental work on a variety of networks in the following section, comparing the rankings based on the common centrality measures discussed in Section 5 with the rankings based on these matrix functions.
8. Experimental Work and Discussion
In this section we aim to analyse experimentally the agreement between the centrality measures discussed in Section 5, and to determine whether the matrix functions discussed in Section 7 can be used to identify the important nodes in a network. The experimental work will compare the matrix functions to the common centrality measures.
A variety of techniques can be used to compute the centrality measures (those discussed in Section 5) and the matrix functions. To compute the exponential of a matrix, the logarithm of a matrix and the other matrix functions we will use SciPy matrix functions. For our new measures involving matrix functions and the generalisation of Katz centrality, we will calculate centralities using the diagonal entries of these functions. We will use the Kendall correlation coefficient in our experiments to compare the agreement between centrality measures.
8.1. Correlation (Kendall, Pearson, Spearman) Coefficient
Correlation is a bivariate analysis that measures the strength of association between two variables. The value of a correlation coefficient varies between 1 and −1. A positive correlation signifies that the ranks of both variables increase together, while a negative correlation signifies that as the rank of one variable increases, the rank of the other decreases. The association between two variables is said to be perfect if the correlation coefficient equals +1 or −1. The closer the value of the correlation coefficient is to 1 or to −1, the stronger the relationship between the two variables; as the correlation coefficient approaches 0, the relationship becomes weaker. In statistics, the strength of association is usually measured by the Pearson correlation, the Kendall rank correlation or the Spearman correlation.
The Kendall coefficient of correlation is a measure of the degree of correspondence between two sets of ranks given to the same set of objects. The Kendall coefficient can be interpreted as the difference between the probability of the objects being in the same order and the probability of their being in a different order.
Let X and Y be two observations, such that $\{(x_1, y_1), \ldots, (x_n, y_n)\}$ is the set of joint observations of X and Y, where the values $x_i$ and $y_i$ are all unique.
・ Any pair of observations $(x_i, y_i)$ and $(x_j, y_j)$ is said to be concordant if
$(x_i - x_j)(y_i - y_j) > 0,$
which implies that either $x_i > x_j$ and $y_i > y_j$, or $x_i < x_j$ and $y_i < y_j$.
・ The pair is said to be discordant if
$(x_i - x_j)(y_i - y_j) < 0,$
which implies that either $x_i > x_j$ and $y_i < y_j$, or $x_i < x_j$ and $y_i > y_j$.
・ The pair is neither concordant nor discordant if $x_i = x_j$ or $y_i = y_j$.
The Kendall correlation coefficient is defined as
$\tau = \frac{n_c - n_d}{\frac{1}{2} n (n - 1)},$
where $n_c$ is the number of concordant pairs and $n_d$ is the number of discordant pairs.
The Kendall rank coefficient can be interpreted as follows: values of $\tau$ greater than zero show an agreement, and values close to one indicate a strong agreement. On the other hand, values less than zero show a disagreement, and those close to negative one indicate a strong disagreement. Indeed, if all pairs are concordant, then $\tau = 1$, which implies that the variables are in exactly the same ranking (order). If all pairs are discordant, then $\tau = -1$, which implies that the variables are in exactly the opposite ranking (order).
Pearson correlation is a measure of the degree of linear relationship between two variables and is denoted by r. Pearson correlation essentially fits a line of best fit through the data of the two variables, and r indicates how far the data points lie from this line.
The following formula is used to calculate the Pearson correlation:
$r = \frac{N \sum xy - \left(\sum x\right)\left(\sum y\right)}{\sqrt{\left[N \sum x^2 - \left(\sum x\right)^2\right]\left[N \sum y^2 - \left(\sum y\right)^2\right]}},$
where:
r = Pearson correlation coefficient;
N = number of observations in each data set;
$\sum xy$ = the sum of the products of paired scores;
$\sum x$ = sum of scores of variable x;
$\sum y$ = sum of scores of variable y;
$\sum x^2$ = sum of squared scores of variable x;
$\sum y^2$ = sum of squared scores of variable y.
To use the Pearson correlation r, the two variables must be measured on either an interval or a ratio scale. However, both variables do not need to be measured on the same scale (for instance, one variable can be ratio and one can be interval). We cannot use the Pearson correlation for ordinal data; instead we use Spearman’s rank correlation or Kendall’s correlation.
Spearman rank correlation is the nonparametric version of the Pearson correlation coefficient and is used to measure the degree of association between two continuous or ordinal variables.
The following formula is used to calculate the Spearman rank correlation:
$\rho = 1 - \frac{6 \sum d_i^2}{N (N^2 - 1)},$
where $d_i$ is the difference between the two ranks of each observation and N is the number of observations.
We use the Spearman correlation coefficient when the relationship between variables is not linear.
Although both the Spearman and Kendall correlations measure monotonic relationships and have a natural interpretation, in this paper we opt to use the Kendall correlation coefficient for the following reasons:
・ The distribution of Kendall's τ has better statistical properties.
・ The interpretation of Kendall's τ in terms of the probabilities of observing concordant (agreeable) and discordant (non-agreeable) pairs is very direct.
・ The Kendall correlation has a smaller gross error sensitivity (GES), making it more robust, and a smaller asymptotic variance (AV), making it more efficient. On the other hand, a direct computation of the Kendall correlation has O(n²) complexity, compared with the O(n log n) complexity of the Spearman correlation, where n is the sample size.
8.2. Network Models
The study of networks is not new: graph theorists and mathematicians have long faced problems in which they tried to make sense of complex networks. Out of this effort, random network theory emerged, positing that the nodes and links of a graph are connected to each other at random. In this paper we consider three network models, chosen for their significance:
・ Erdös-Rényi model: networks formed by completely random interactions between the nodes. Each node chooses its neighbours at random, constrained either by an overall number of relationships assigned in the graph, or by a probability of connecting to a given neighbour. Mathematically, the node degrees in such a network follow a Poisson distribution: the vast majority of nodes have roughly the same number of links, and outliers are almost impossible to find.
・ Barabási-Albert model: scale-free networks formed by two simple mechanisms, growth and preferential attachment. The main prediction of a scale-free network is the presence of a few outlier nodes with very many connections, also known as hubs. Preferential attachment is a probabilistic mechanism: a new node is free to connect to any node in the network, whether it is a hub or has a single link.
・ Watts-Strogatz model: important because it shows how the "small-world effect" in networks can coexist with other commonly observed features of social networks, such as a high clustering coefficient. More specifically, the model shows how adding a small fraction of random long-range links to an otherwise regular network leads to slow, logarithmic scaling of the typical distance between nodes with network size.
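The three models above can be generated, for instance, with the networkx generators (a minimal sketch; the parameter values here are illustrative, not those used in the experiments below).

```python
# Sketch (assuming networkx): generating the three random-network models.
import networkx as nx

n = 200
er = nx.erdos_renyi_graph(n, p=0.05, seed=1)         # G(n, p): links with equal probability
ba = nx.barabasi_albert_graph(n, m=3, seed=1)        # growth + preferential attachment
ws = nx.watts_strogatz_graph(n, k=6, p=0.1, seed=1)  # ring lattice with random rewiring

for name, g in [("Erdos-Renyi", er), ("Barabasi-Albert", ba), ("Watts-Strogatz", ws)]:
    print(name, g.number_of_nodes(), "nodes,", g.number_of_edges(), "edges")
```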
8.2.1. First Experiment
We begin our experiments by considering a small network with 20 nodes. The network was generated at random, recorded in a text file and drawn using Sage. The aim is to determine which of the matrix functions give rankings similar to the common centrality measures. There are many functions to choose from, and we want to limit our choice. Note that we do not consider the exponential function, since the diagonal of the matrix exponential coincides with the subgraph centrality.
The experiment shows that the diagonal entries of the logarithmic and cosine functions rank the nodes in reverse order compared with the other rankings. We also observe that the sine function does not match any other centrality measure. The network in Figure 10, with 20 nodes and 42 edges, gives a concrete picture of node ranking.
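A minimal sketch of how such rankings can be computed (assuming numpy/networkx; the helper matrix_function_diagonal and the test graph are ours): for a symmetric adjacency matrix A = U Λ Uᵀ, the diagonal of f(A) = U f(Λ) Uᵀ scores the nodes. Note that diag(exp(A)) is exactly the subgraph centrality, while the logarithm needs extra care since A is typically singular.

```python
# Sketch: node scores from the diagonal of a matrix function f(A),
# computed through the spectral decomposition of the symmetric matrix A.
import numpy as np
import networkx as nx

def matrix_function_diagonal(G, f):
    """Diagonal of f(A) via A = U diag(lam) U^T (A symmetric)."""
    A = nx.to_numpy_array(G)
    lam, U = np.linalg.eigh(A)                    # real spectral decomposition
    return np.einsum('ij,j,ij->i', U, f(lam), U)  # diag(U f(Lam) U^T)

G = nx.barabasi_albert_graph(20, 2, seed=7)       # illustrative 20-node network
for name, f in [("exp (subgraph)", np.exp), ("cosh", np.cosh), ("sinh", np.sinh)]:
    scores = matrix_function_diagonal(G, f)
    print(name, list(np.argsort(-scores)[:5]))    # five highest-ranked nodes
```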
Table 1. Rankings of nodes using centrality measures and matrix functions for the network in Figure 10.
Figure 10. Network with 20 nodes and 42 edges.
Note that the ranking is from the most to the least important/central node with respect to the centrality measure used.
To avoid making many pairwise comparisons, via the Kendall correlation coefficient, between the common centrality measures and those produced by the matrix functions, we select a single representative centrality measure. To do this, we first need to check whether the chosen measure agrees with the other centrality measures. Here we compare the closeness centrality (CC) with the degree centrality (DC), eigenvector centrality (EC), Katz centrality (KC) and subgraph centrality (SC) for the graph in Figure 10.
Table 2 shows that there is an agreement between closeness centrality and other centrality measures.
The graph in Figure 11 likewise shows an agreement between closeness centrality and the other centrality measures.
In Table 1, we had to modify the cosine and logarithmic functions so that they match the other rankings. The simplest way to do this is to reverse the order of their rankings.
To be more confident about the rankings of nodes given by the matrix functions, we use the Kendall correlation coefficient to compare closeness centrality with the matrix functions. We chose closeness centrality among the standard centrality measures for this comparison inasmuch as it takes into account the neighbours, and the neighbours of neighbours, of a node to determine its centrality. In the comparisons, we denote by τ_f the Kendall coefficient between closeness centrality and the centrality measure induced by the matrix function f.
In Table 3, we reversed the rankings given by the cosine and the logarithmic functions before calculating the Kendall coefficients. We observe in Table 3 that the Kendall coefficients for the logarithmic, cosh, sinh and cosine functions are all positive. This implies the agreement of these matrix functions with the
Table 2. Kendall coefficients between closeness centrality and other centrality measure applied to graph in Figure 10.
Table 3. Kendall coefficients between closeness centrality and matrix functions applied to graph in Figure 10.
Figure 11. Agreement.
closeness centrality measure, and the agreement is quite strong for the logarithmic, cosh and cosine functions, since their Kendall coefficients are close to 1. On the other hand, the Kendall coefficient between closeness centrality and the sine function is negative, which implies that there is no agreement between their rankings.
8.2.2. Second Experiment
We compare the agreement of centrality measures by generating 10 random networks using the Barabási-Albert preferential attachment model.
The Barabási-Albert model is a simple scale-free random graph generator. The network begins with an initial set of nodes, each of degree at least 1; otherwise the network will always end up disconnected. At each step, a new node is created and connected to existing nodes with a probability proportional to the number of links those nodes already have, so that nodes with higher degree have a higher chance of being selected for attachment. To use this method, we specify the number of nodes in the network (n) and the number of edges each new node forms when it appears (m).
The comparisons involve rankings of nodes using centrality measures such as closeness centrality, degree centrality, eigenvector centrality, Katz centrality and subgraph centrality. In this experiment we compute the Katz centrality as x = (I − αA)⁻¹1; in all cases we take the same fixed value of α. For each choice of n and m, we generate 5 networks and record the mean values of the Kendall coefficients. The Kendall correlation coefficients are denoted according to Table 4.
It is evident from Table 5 that all Kendall coefficients are positive, which indicates an agreement between the rankings. The Kendall coefficients show that
Table 4. The notations of Kendall correlation coefficients.
Table 5. Kendall coefficients for centrality measures applied to different random networks generated by using the Barabási-Albert method.
n: Number of nodes in the network. m: Number of neighbours to attach to each new node.
closeness centrality is highly correlated with the other centrality measures, and that the eigenvector, Katz and subgraph centralities are also highly correlated with one another. The experiment shows that the agreement becomes stronger as the network becomes more connected. In general, for sufficiently dense (i.e., highly connected) networks, the measures provide almost identical rankings, producing Kendall correlation coefficients close to 1.
8.2.3. Third Experiment
We generate 10 random networks as in the second experiment. This time, we fix the value of n at 200 and vary m. We calculate the Katz centrality for each network using different choices of α. Recall that the Katz centrality of the nodes is given by x = (I − αA)⁻¹1. We choose several values of α in the interval (0, 1/λ₁), where λ₁ is the principal eigenvalue of the adjacency matrix A.
We calculate the Kendall correlation coefficients between all pairs of measures as denoted in Table 6.
Note that the Kendall coefficients involving the centrality measures based on the inverse (resolvent) function are computed after taking the absolute values of its entries, while the other coefficients are computed using the standard formula.
We observe in Table 7 that one of the Kendall coefficients is exactly 1, and that several other pairs of coefficients coincide. Since the corresponding measures are all instances of the generalised Katz centrality, we conjecture that the rankings provided by any generalised Katz centrality are the same regardless of the choice of α, provided that 0 < α < 1/λ₁.
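The conjecture can be probed numerically (a sketch assuming numpy/networkx/SciPy; the test graph and the fractions of 1/λ₁ used below are illustrative): compute Katz vectors for several values of α inside (0, 1/λ₁) and measure the agreement of their rankings with Kendall's τ.

```python
# Sketch: comparing Katz rankings for different attenuation factors alpha.
import numpy as np
import networkx as nx
from scipy.stats import kendalltau

def katz_centrality(G, alpha):
    A = nx.to_numpy_array(G)
    n = A.shape[0]
    return np.linalg.solve(np.eye(n) - alpha * A, np.ones(n))

G = nx.erdos_renyi_graph(60, 0.1, seed=5)
lam1 = np.linalg.eigvalsh(nx.to_numpy_array(G)).max()
scores = [katz_centrality(G, frac / lam1) for frac in (0.1, 0.5, 0.9)]
for s in scores[1:]:
    tau, _ = kendalltau(scores[0], s)
    print(f"tau = {tau:.4f}")   # close to 1 if the conjecture holds
```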
8.2.4. Fourth Experiment
We repeat the second experiment, generating 10 random networks with different numbers of nodes by using the Erdös-Rényi method. We use the same notation for the Kendall coefficients as in the second experiment; see Table 4.
The Erdös-Rényi model is used to generate random networks in which edges are set between nodes with equal probabilities. The model can be used to prove
Table 6. The notations of Kendall correlation coefficients.
Table 7. Kendall coefficients for generalized Katz centrality applied to random networks generated by using the Barabási-Albert method.
the existence of networks satisfying various properties; it can also be used to provide a rigorous definition of what it means for a property to hold for almost all networks. To generate random networks using the Erdös-Rényi model, we need to specify two parameters: the number of nodes in the network, denoted by n, and the probability p that a link is formed between any two nodes.
The Kendall coefficients in Table 8 are all positive and close to 1. In terms of Kendall coefficients, the rankings of nodes under the various centrality measures for random networks generated by the Erdös-Rényi method are thus highly correlated. This implies that there is a strong agreement in their rankings.
We repeat the third experiment but this time, generating 10 random networks with 200 nodes by using the Erdös-Rényi method. We will use the same definition of Katz centrality and the same notation as in Table 6.
Table 9 shows that there is a high agreement between some pairs of rankings. On the other hand, we observe that the rankings of the nodes produced by the Katz centrality for certain choices of α and those produced by the generalised Katz centrality disagree, as the corresponding Kendall coefficients in Table 9 are approximately zero.
8.2.5. Fifth Experiment
We repeat the second experiment, but now with 10 random networks generated by using the Watts-Strogatz method. The same notation for the Kendall coefficient will be used as that used in the second experiment, see Table 4.
Table 8. Kendall coefficients for centrality measures applied to different random networks generated by using the Erdös-Rényi method.
n: Number of nodes in the network. p: Probability of edge creation.
Table 9. Kendall coefficients for generalised Katz centrality applied to 10 random networks with 200 nodes generated by using the Erdös-Rényi method.
n: Number of nodes in the network. p: Probability of edge creation.
The Watts-Strogatz model was developed as a way to impose a high clustering coefficient onto classical random graphs. It produces networks with the small-world property. To generate these networks, we use watts_strogatz_graph(n, k, p) in Sage. Here, n denotes the number of nodes in the network, which are arranged in a ring and connected to their k nearest neighbours in the ring. Each edge is then considered independently and, with probability p, is rewired to a new endpoint chosen uniformly at random, in accordance with the experiments detailed in .
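A sketch of this construction (assuming networkx; we use connected_watts_strogatz_graph for the rewired case so that the path-length computation is well defined): a small amount of rewiring keeps the clustering coefficient high while sharply shortening typical distances.

```python
# Sketch: the small-world effect -- 5% rewiring barely changes clustering
# but collapses the average shortest path length.
import networkx as nx

ring = nx.watts_strogatz_graph(n=500, k=6, p=0.0, seed=2)           # pure ring lattice
sw = nx.connected_watts_strogatz_graph(n=500, k=6, p=0.05, seed=2)  # 5% of edges rewired

for name, g in [("ring lattice", ring), ("small world", sw)]:
    print(name,
          "clustering:", round(nx.average_clustering(g), 3),
          "avg path:", round(nx.average_shortest_path_length(g), 2))
```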
In our experiment, we varied n, k and p and, in each case, created five networks. The averages of the Kendall coefficients over these 5 networks for the different centrality measures are given in Table 10. These Kendall coefficients are computed for the complete set of rankings, and they show that the agreement between the centrality measures is much weaker than for the networks produced by the Barabási-Albert and Erdös-Rényi methods. The experiment also shows that as the network becomes denser, the correlation between measures becomes stronger.
We then repeat the third experiment using the Watts-Strogatz method. The same definition of Katz centrality and the same notation were used as in Table 6.
8.2.6. Sixth Experiment
The aim of this experiment is to use the three methods (Barabási-Albert, Erdös-Rényi and Watts-Strogatz) for generating random networks and to use
Table 10. Kendall coefficients for centrality measures applied to random networks generated by using the Watts-Strogatz method.
n: Number of nodes in the network. k: Each node is connected to k nearest neighbours. p: Probability of rewiring each edge.
Table 11. Kendall coefficients for generalised Katz centrality applied to 10 random networks with 200 nodes generated by using the Watts-Strogatz method.
the Kendall correlation coefficient to see whether there is an agreement between closeness centrality and matrix functions such as the logarithmic, cosh, sinh, cosine and sine functions. Using each method, we generate 10 random networks; for each of these, we create five networks and obtain the Kendall coefficient by averaging over them. The aim is to see whether the pattern observed in our first experiment is repeated.
Table 12 shows that there is an agreement between closeness centrality and matrix functions (logarithmic, cosh and sinh) in their ranking measures for
Table 12. Kendall coefficients between closeness centrality and matrix functions for 10 random networks generated by using the Barabási-Albert method.
networks generated by using the Barabási-Albert method. On the other hand, the agreement between closeness centrality and other matrix functions such as cosine and sine is weak.
We now generate 10 random networks by using the Erdös-Rényi and Watts-Strogatz methods.
Table 13 shows that networks generated by using the Erdös-Rényi method agree in the ranking measures given by closeness centrality and matrix functions such as logarithmic, cosh and sinh.
Table 14 shows that the agreement between closeness centrality and the matrix functions (cosh, sinh, cosine and sine) is weak in many cases. When we use the Watts-Strogatz method to generate the networks, the logarithmic function gives a ranking similar to closeness centrality only for certain parameter values.
In general, the agreement between closeness centrality and the hyperbolic functions (cosh and sinh) is not strong for networks generated by the Watts-Strogatz method. The agreement between closeness centrality and the cosine and sine functions is weak for networks generated by all three methods (Barabási-Albert, Erdös-Rényi and Watts-Strogatz). Among the tested matrix functions, the logarithmic function gives the best agreement with the other centrality measures.
8.2.7. Real-World Network Experiments
In this experiment, we study the Kendall correlation coefficient for real-world networks. The networks come from a variety of sources: some of the data were obtained from the Gephi sample datasets and others from the Pajek datasets . We compare only some of the centrality measures (closeness, subgraph and Katz) and the logarithmic matrix function. Note that we are not interested in the meaning of each node within the
Table 13. Kendall coefficients between closeness centrality and matrix functions for 10 random networks generated by using the Erdös-Rényi method.
Table 14. Kendall coefficients between closeness centrality and matrix functions for 10 random networks generated by using the Watts-Strogatz method.
network; what we really want to know is whether there is an agreement between these centrality measures on real-world networks. We chose the logarithmic function over the other matrix functions since it shows a high agreement with the other centralities when applied to random networks.
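As a sketch of this methodology on a standard real-world example (Zachary's karate club, bundled with networkx; not one of the paper's datasets), we compare closeness centrality with subgraph centrality, i.e. the diagonal of exp(A), which, unlike the logarithm, is always well defined.

```python
# Sketch: Kendall's tau between closeness and subgraph centrality on a
# real-world network (Zachary's karate club, 34 nodes).
import numpy as np
import networkx as nx
from scipy.linalg import expm
from scipy.stats import kendalltau

G = nx.karate_club_graph()
A = nx.to_numpy_array(G)

subgraph = np.diag(expm(A))  # subgraph centrality: diagonal of exp(A)
closeness = np.array([nx.closeness_centrality(G, u) for u in G.nodes()])

tau, _ = kendalltau(closeness, subgraph)
print(f"tau(closeness, subgraph) = {tau:.3f}")
```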
We can clearly see from Table 15 that all Kendall coefficients are positive. This implies that there is an agreement between the rankings of the nodes. We also observe that the agreement between the centrality measures (closeness, Katz, subgraph) and the logarithmic function is high, irrespective of the connectivity of the underlying network. In general, the logarithmic function is the best of the matrix functions we tried and can be used as a centrality measure, since it gives rankings similar to those of the other centrality measures.
Table 15. Kendall coefficients for the logarithmic function and some standard centrality measures as applied to 10 real-world networks.
In this work we examined centrality measures such as closeness, degree, eigenvector, Katz and subgraph centrality. We showed the relationship between the Katz centrality and the eigenvector and degree centralities. We developed our notion of a centrality measure by considering rankings of nodes based on matrix functions such as the logarithmic, cosine, sine and hyperbolic functions, and on the generalised Katz centrality. We showed experimentally, using various classes of graphs, that the rankings of the nodes given by closeness, degree, eigenvector, Katz and subgraph centrality are highly correlated. Moreover, we showed experimentally that the rankings of nodes given by different choices of the attenuation factor α for the generalised Katz centrality, with 0 < α < 1/λ₁, where λ₁ is the principal eigenvalue of the network adjacency matrix A, are exactly the same. In terms of matrix functions, the experiments show that there is a high agreement between the rankings of nodes given by the logarithmic function and the other common centrality measures discussed in Section V.
Similar results were found to hold for real-world networks: the rankings given by the logarithmic function and those given by closeness, Katz and subgraph centrality are highly correlated irrespective of the connectivity of the network. In general, we concluded that the logarithmic function, out of the matrix functions we have considered, is the best and can be used as a centrality measure.
In this work, we considered only the diagonal entries of the matrix functions, with some modifications in the calculation of their Kendall correlation coefficients. We found that the logarithmic function gives a relatively good ranking compared with the rankings given by other centrality measures. In future work, one could consider the row sums of these matrix functions, with or without modification, and examine whether they give rankings similar to other centrality measures. The paper did not analyse the significance and uses of centrality measures based on matrix functions; this is left for future work.