GraphSLAM: why are constraints imposed twice in the information matrix?
I was watching Sebastian Thrun's video course on AI for robotics (freely available on udacity.com). In his final chapter on GraphSLAM, he illustrates how to set up the system of equations for the mean path locations $x_i$ and landmark locations $L_j$.
To set up the matrix system, he imposes each robot motion and landmark measurement constraint twice. For example, if a robot motion command moves the robot from $x_1$ by 5 units to the right (reaching $x_2$), I understand this constraint as $$x_2 - x_1 = 5.$$
However, he also imposes the negative of this equation, $$x_1 - x_2 = -5,$$ as a constraint and superimposes it onto a different row of the matrix, and I'm not sure why. In his video course, he briefly mentions that the matrix we're assembling is known as the "information matrix", but I have no idea why the information matrix is assembled in this specific way.
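To make the question concrete, here is a minimal sketch of the assembly as I understand it from the lectures, for a toy 1-D problem with two poses and the single motion constraint $x_2 - x_1 = 5$ (the anchoring of $x_1$ and the unit information weights are my own assumptions, just to make the system solvable):

```python
import numpy as np

# Toy 1-D GraphSLAM: two poses x1, x2 and one motion constraint x2 - x1 = 5.
# Variables are ordered as [x1, x2]; Omega is the "information matrix",
# xi the information vector.
Omega = np.zeros((2, 2))
xi = np.zeros(2)

# Anchor the first pose at x1 = 0 so the system is not singular.
Omega[0, 0] += 1.0
xi[0] += 0.0

# The motion constraint imposed "twice", as in the lectures:
# once on x1's row (as x1 - x2 = -5) and once on x2's row (as x2 - x1 = 5).
d = 5.0
Omega[0, 0] += 1.0; Omega[0, 1] -= 1.0; xi[0] -= d   # row for x1
Omega[1, 1] += 1.0; Omega[1, 0] -= 1.0; xi[1] += d   # row for x2

# Best estimate of the path is the solution of Omega @ mu = xi.
mu = np.linalg.solve(Omega, xi)
print(mu)  # -> [0. 5.]
```

This does produce the expected poses, but it only reproduces the recipe; it doesn't tell me *why* both rows are needed.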
So, I tried to read his book Probabilistic Robotics, and all I can gather is that these equations come from obtaining the minimizer of the negative log posterior probability incorporating the motion commands, measurements, and map correspondences, which results in a quadratic function of the unknown variables $L_j$ and $x_i$. Since it is quadratic (and the motion / measurement models are also linear), the minimum is obtained by solving a linear system of equations.
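For concreteness, here is as far as I can push the derivation for the toy constraint $x_2 - x_1 = 5$ above. The corresponding term of the negative log posterior would be (as I understand it, with $\sigma^2$ the motion noise variance)

$$J(x_1, x_2) = \frac{(x_2 - x_1 - 5)^2}{\sigma^2},$$

and setting its gradient to zero gives one equation per variable the constraint touches:

$$\frac{\partial J}{\partial x_1} = \frac{-2\,(x_2 - x_1 - 5)}{\sigma^2} = 0, \qquad \frac{\partial J}{\partial x_2} = \frac{2\,(x_2 - x_1 - 5)}{\sigma^2} = 0.$$

The constraint does seem to appear twice here, with opposite signs, but I haven't been able to connect this cleanly to the row-by-row assembly shown in the lectures.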
But why is each of the constraints imposed twice, once as a positive quantity and once as the negative of the same equation? It's not immediately obvious to me from the form of the negative log posterior probability (i.e. the quadratic function) that the constraints must be imposed twice. Why is the "information matrix" assembled this way? Does this also hold when the motion and measurement models are nonlinear?
Any help would be greatly appreciated.