Once we understand what is meant by Rosennean complexity, it becomes relatively easy to see that error and emergence are closely related concepts. In particular, complexity, error and emergence all have their basis in some kind of deviation of a natural system’s actual behavior from the behavior expected on the basis of some model(s) of the system.
Therefore, as with complexity, error and emergence are not intrinsic properties of a natural system, nor are they properties of the models involved. Instead, these characterizations arise from the manner in which the model(s) deviate from the system under study. These concepts fit naturally with our intuitive notions of error and emergence, just as the concept of Rosennean complexity reflects our intuitive notion of complexity.
As a reminder, we recall that when a Modeling Relation holds between a natural system and a formal system, the encoding process involves taking measurements of the natural system and representing those measurement results in some abstract form, generally mathematical (e.g., a number). Further, the features or qualities of the system that we measure are called observables.
The concept of “error”
In order for the concept of error to be meaningful, there must be some criterion by which to assert that an error has occurred. That is, some expectation about the observed behavior of the system must be invoked, together with a failure to meet that expectation. Without that, there is no sense in which “error” occurs.
Of course, the comparison of expected behavior with observed (measured) behavior is simply a loose way of describing a Modeling Relation. We then understand error to refer to a situation in a Modeling Relation where there occurs a discrepancy or bifurcation of the formal model’s predictions from the actual behavior measured in the system.
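As a toy sketch of this bifurcation (the functions and numbers here are our own assumptions, not anything from Rosen's text), we can picture error as the gap between what a formal model predicts for an observable and what measurement of the natural system actually returns:

```python
import math

# Hypothetical example: a formal model predicts linear growth of an
# observable, while the natural system also carries an interaction the
# model is closed to (here, a small oscillation). The discrepancy
# between prediction and measurement is what we are calling "error".

def model_prediction(t):
    # Formal model: the observable grows linearly with time.
    return 2.0 * t

def measured_value(t):
    # Natural system: the same trend, plus unmodeled behavior.
    return 2.0 * t + 0.1 * math.sin(5.0 * t)

errors = [measured_value(t) - model_prediction(t) for t in (0.5, 1.0, 1.5)]
print(errors)  # nonzero values: the predictions bifurcate from measurement
```

The particular form of the unmodeled term is irrelevant; the point is only that the discrepancy is a relation between model and system, not a property of either alone.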
At this point, we might suggest that the reason for the error is that the model needs revising or replacing, such that an updated model’s predictions would no longer bifurcate from actual system behavior. And this may indeed be appropriate. However, there are additional considerations beyond the model itself.
The act of measurement of observables carries with it limitations of its own. As Rosen points out:
“In empirical terms, the concept of error arises already at the most basic level; namely, in the operation of those meters or measuring instruments through which we formalize the notion of an observable. It is universally recognized that any measuring instrument is subject to “noise” or variability; or, stated another way, that every measuring instrument has a finite resolving power, below which it cannot discriminate.”
Below the resolving power of the measuring instrument, any variations in the observables will be undetectable. In other words, the system may have subtleties in behaviors which the measuring instrument cannot detect. Another way to say this is that the system can exhibit degrees of freedom that are invisible to the observer. 
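A minimal sketch of this point (the resolution value and states are arbitrary assumptions of ours): quantizing readings to an instrument's resolving power makes distinct system states indistinguishable to the observer.

```python
RESOLUTION = 0.1  # smallest difference this hypothetical meter can discriminate

def measure(true_value):
    # The meter rounds the system's true value to its own resolution.
    return round(true_value / RESOLUTION) * RESOLUTION

# Two distinct states of the system, differing below the resolution...
state_a, state_b = 0.52, 0.54

# ...yield identical readings: a degree of freedom invisible to the observer.
print(measure(state_a), measure(state_b))  # 0.5 0.5
```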
In addition to this, the numbers into which measurements are abstracted can carry only a limited number of decimal digits. The recording of a measurement is thus the replacement of a real number with a truncated value: an approximation. Once again, we can view this as the system having yet other degrees of freedom that are rendered invisible to the observer.
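The same idea in a brief sketch (the value and precision are arbitrary assumptions): recording a measurement to a fixed number of decimals discards part of the value, and the discarded residual is invisible in the record.

```python
true_value = 2 ** 0.5              # an observable's "real" value, 1.41421356...
recorded = round(true_value, 2)    # the record keeps only two decimal digits

print(recorded)                    # 1.41
print(true_value - recorded != 0)  # True: the approximation lost information
```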
The invisibility of degrees of freedom can also be characterized in terms of the model: we can say that the model is closed to certain interactions to which the system is open. That is, we can think of error as arising from a bifurcation between the behavior of a system open to certain interactions and the behavior of another system closed to those interactions.
This last comment should remind us that the definition of a natural system is an artificial partitioning of the physical world, made according to our judgment. In our model, this partitioning is apparent in the very limited and select degrees of freedom the model is allowed to have with any “environmental” influences. On the other hand, we cannot fully isolate a portion of the physical world, and the natural system remains potentially open to an unknown number and variety of environmental interactions. This, then, is the source of potential variability in the natural system’s behavior. Error arises when this underlying variability manifests in behaviors which bifurcate from the behavior of the model.
So, error involves not only the model that is utilized; it also involves, in fundamental ways, the nature of the concept of a system and the nature of the measurement process. However, tacit in the above discussion is the assumption that as the system evolves through time, the bifurcation involves only the values of the observables, not the set of observables themselves. In other words, there is a tacit assertion that as the system evolves, it always remains describable by one fixed set of observables and a single set of states.
What is termed emergence is the inability to describe the evolution of a system by a single set of observables and states.  In a sense, emergence is simply a continuation of the notion of error as a bifurcation of behavior between a system and a model.  In the case of emergence, though, the nature of the bifurcation is such that the current model must be replaced with a different model in order to capture the new set of states and observables of the system.
Once again, we see that emergence is also not an intrinsic feature of a system, or of a model.  Rather, emergence refers merely to a certain type of relationship between a system and its models.
The mystery of emergence
That emergence requires that we pass from one system description to another as the system evolves seems to confound and perplex us. Inside each model, an inferential structure exists that we can use to ask and answer questions about the system under study. However, when we pass from one model to another, there is no apparent inferential entailment between the models. In other words, there seems to be no reason why the system forces us to change descriptions. And here is the source of the apparent mystery of emergence.
However, this view is flawed.
If we go back to the basics of the Modeling Relation, we recall that the construction and implementation of a model is based on establishing a congruence between the observables and linkages in the system and those in the proposed model. That is, we generally allow the existence of the observables and states of the system to direct the specific nature of the model that is constructed. Another way to say this is that if we ask “why the model?”, one answer is: “because the system states and observables”.
Likewise, in the case of emergence, the inferential link we seek is not properly located between the old model and the new model, but it is instead still located in the structure of the Modeling Relation itself. So, when we ask “why the new model?” the answer continues to be: “because the system states and observables”.
Upon inspection, we can see that the attempt to answer the question “why the new model?” in terms of relations between the old and new model is actually an implicit attempt to force the two models to be two parts of one larger model. However, it is just this inability to combine or concatenate these models in any way that led to a need to switch models in the first place; otherwise, we would have simply used the larger model (the concatenation of these two smaller models) from the outset. In that case, there would be no emergence.
The point is that emergence has a sense of mystery only to the extent that we maintain a belief that a single type of system description is always adequate over the evolution of a system. This kind of belief is another example of the assumption that effective process can be equated with algorithmic construction (and therefore with a single model). Instead, the physical world is what we seek to model, and our modeling paradigms are therefore obliged to treat the processes in the physical world as the actual domain of effective processes. From this perspective, it becomes natural for our modeling efforts to track those effective processes and to transition between models fluently, in accordance with the system’s transitions between sets of states and observables.
References & Footnotes
AS: Rosen, R. 1985. Anticipatory Systems. Pergamon Press
EL: Rosen, R. 1998. Essays on Life Itself. Columbia University Press
FM: Rosen, R. 1978. Fundamentals of Measurement and Representation of Natural Systems. Elsevier Science
LI: Rosen, R. 1991. Life Itself. Columbia University Press