I am working to develop expertise in Lean Systems Engineering, Massively-Collaborative Systems, Artificial Intelligence and Creative Writing (Speculative Fiction). I want to use this page to organize content that I find relevant to these pursuits or that I am otherwise interested in for various reasons.

Research Questions

  1. Why does it appear that Systems Engineering is devoid of scientifically verifiable theories?
  2. How does one describe any arbitrary model?
  3. What features must a system have in order to effectively describe any arbitrary model, specifically its rules, subdomains and patterns?
  4. How does one analyze any arbitrary, undefined model?
  5. How does one compare any two arbitrary models (hypotheses, functions)?
  6. How does one utilize biased knowledge of extant models to inform the creation of a new model? Is this analogous to visualization?
  7. How does one research for new models?
  8. How does one modify a model methodically without violating certain invariants? What invariants are relevant for effective modification of a model given the purpose / context of the modification?

How does one describe any arbitrary model?

I was once motivated to find certain prototypical patterns which could be used as the basis for defining pattern-ness. I've now come to realize that such a distinction, between pattern and non-pattern, would be rather arbitrary, and that focusing on it would detract from the goal of a generalizable pattern detector - an intelligent entity.

Instead of defining pattern-ness, I now believe that this decision problem must be left to the entity itself, deciding according to its own bias and experience. The problem then becomes: how does one synthesize an effective system that can define and analyze arbitrary models, and use its past experiences and models (collectively, biases) to inform the creation and selection of further models, within some confine of operational stability?

In this context, I want to define a model as any hypothetical function that is defined in such a way that distinguishes certain properties of the function, making the function amenable to analysis. Here, a function is any (possibly uncountably infinite) set of rules that map elements from one (possibly uncountably infinite) set into another (possibly countably infinite) set. Now, I want to define how to describe (write down) any arbitrary model; the caveat is that the model may be an uncountably infinite set of rules, yet I must use only a finite amount of space and time to write down its description.
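
As a minimal formalization of these definitions (my own informal notation; the symbol desc(M_f), for "the written description of a model of f", is introduced here purely for illustration):

```latex
f \;=\; \{(x,\, y) \;:\; x \in X,\; y \in Y,\; x \mapsto y\} \;\subseteq\; X \times Y,
\qquad
\lvert \mathrm{desc}(M_f) \rvert \;<\; \infty
```

That is, the domain X, the codomain Y, and hence f itself may be infinite, while the description of the model must remain finite in both space and the time taken to produce it.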

So, how does one write down any arbitrary model? The first counter-question I would ask, if given such a task, is: "Is there a purpose for this function? Or is it just a set of rules?" In computational terms, I think this asks whether the function is compressible (and if so, how) or arbitrary (random, incompressible). This distinction between compressible and incompressible functions seems foundational. Further, certain functions are partially compressible (their behavior over a certain subset of the domain is compressible). I believe it may be useful to assume all functions are partially compressible, and then, in describing the function, I must (a code sketch of such a description follows the list below):

  1. define the unknown domain as incompressible, do not enumerate!
  2. enumerate the known elements of the codomain corresponding to the subdomains of incompressibility
  3. describe the patterns which define the subdomains of compressibility (what does a subdomain description look like? what does a pattern description look like?)
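
As a minimal sketch of what such a three-part description might look like in code (all of the names here are hypothetical illustrations, not a committed design):

```python
from dataclasses import dataclass, field
from typing import Any, Callable, Dict, List

@dataclass
class SubdomainPattern:
    """A compressible subdomain: a membership test plus a rule that
    reproduces the function's behavior on that subdomain."""
    contains: Callable[[Any], bool]   # which inputs this pattern covers
    rule: Callable[[Any], Any]        # compressed rule for those inputs

@dataclass
class ModelDescription:
    """Finite description of a possibly infinite, partially compressible function."""
    # Step 1: anything not covered below is treated as incompressible and is
    #         deliberately NOT enumerated.
    # Step 2: observed input/output pairs drawn from the incompressible subdomains.
    observed_pairs: Dict[Any, Any] = field(default_factory=dict)
    # Step 3: patterns describing the compressible subdomains.
    patterns: List[SubdomainPattern] = field(default_factory=list)

    def evaluate(self, x: Any) -> Any:
        for p in self.patterns:
            if p.contains(x):
                return p.rule(x)
        # Fall back to enumerated observations; truly unknown inputs stay unknown.
        return self.observed_pairs.get(x)
```

For example, a description of a mostly linear sensor response might add one pattern for the linear region and simply enumerate the handful of observed outliers.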

In conclusion, I believe that one should define any arbitrary model as a (possibly compressible) set of arbitrary rules for mapping elements from one set (domain) into another set (codomain), effectively acting as an analyzable hypothetical function. Note: The qualification "possibly compressible" (analyzable) is not a necessary quality, but I find it useful for my goals and it does not appear to impose a restriction on the generality of the statement.

The next challenge I will focus on is investigating methods to describe the domain (subdomains) and patterns of a model.

Side note: I believe that I may eventually find that the theoretical limitations are not such a hindrance to implementing useful AGI. I believe that the major challenges are actually Optimization (Speed, Storage), Decision Making (Bias, Style), and Safety (It may be challenging to monitor and control the behavior of an independent agent).

Side note: I have an intuition that I should admit the possibility of imprecision: I fear that it may not be possible to accomplish the task as stated, due to limits observed in computability theory. As such, I think one minor change that may make this amenable to solution is relaxing the precision / accuracy of my description of the arbitrary model. Unfortunately, this is a Gordian knot-style solution - accomplishing a stated goal by removing (or ignoring) an implicit restriction that was not explicitly stated in the problem definition. Therefore, any solution that is a direct result of this clarification of the original problem should expressly note the clarification as contextually important.

What features must a system have in order to effectively describe any arbitrary model, specifically its rules, subdomains and patterns?

Trivial Answer: In order to effectively describe an arbitrary model, a system must have effective subsystems for encoding (storing), associating and identifying (retrieving) the rules, subdomains and patterns that define that model. Less trivially, the following may also be required (a minimal interface sketch follows the list below):

  1. The ability to measure certain properties about itself? If so, which properties?
  2. The initial state of the system may be such that it has some predefined models, but it must be free to eliminate any and incorporate any new ones.
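
Read literally, the trivial answer suggests an interface along these lines (a sketch only; the class and method names are mine and carry no special status):

```python
from abc import ABC, abstractmethod
from typing import Any, Iterable

class ModelStore(ABC):
    """Subsystems implied by the trivial answer: encode (store), associate,
    and identify (retrieve) rules, subdomains and patterns."""

    @abstractmethod
    def encode(self, element: Any) -> str:
        """Store a rule, subdomain or pattern and return an identifier for it."""

    @abstractmethod
    def associate(self, id_a: str, id_b: str, strength: float) -> None:
        """Record a measured association between two stored elements."""

    @abstractmethod
    def identify(self, query: Any) -> Iterable[str]:
        """Retrieve identifiers of stored elements matching a query."""

    @abstractmethod
    def forget(self, identifier: str) -> None:
        """Remove a stored element, so that predefined models can be
        eliminated (see item 2 above)."""
```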

How does one analyze any arbitrary, undefined model?

If one hopes to take advantage of possible compressibility or optimize for speed and space, one may want to avoid attempting to accurately analyze arbitrary unknown models directly. (I think that computability theory indicates that optimal and effective approaches to general problem spaces may not exist.) Instead, one may want to utilize heuristics (general rules), biases (extant models) and context (system state) to perform a contrived (non-general) evaluation of an arbitrary, undefined model (hidden, unknown, and potentially pathological, random, or non-deterministic).

The general problem with attempting to describe undefined functions is that we are usually given only the outputs (range); we may or may not be given some set of possible inputs, which may or may not have a direct effect on a particular output; and, by further implication, we do not know which subsets of the domain make the function amenable to compression. Although the forward problem (using a defined function to produce outputs) is directly amenable to deterministic computation, the inverse problem (using outputs and inputs to define a function) has so far proven elusive.

According to steps 1 and 2 of the model-definition process given above, one naive way to deal with an undefined model is simply to encode its observed inputs and outputs and to ignore all unobserved combinations of inputs and outputs.
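
A toy illustration of this naive approach, assuming nothing more than a stream of observed (input, output) pairs:

```python
class NaiveObservedModel:
    """Memorize observed (input, output) pairs; say nothing about the unobserved."""

    def __init__(self):
        self._table = {}

    def observe(self, x, y):
        self._table[x] = y          # later observations overwrite earlier ones

    def predict(self, x):
        # Unobserved inputs are left undefined rather than guessed at.
        return self._table.get(x)
```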

It seems that contemporary methods attempt to match subsets of the input/output set against subsets of the input/output sets of some general rules (heuristics), then use the matches and misses to inform either identification or generation of a descriptive / analytical model. Such methods often utilize non-deterministic algorithms in order to avoid certain pitfalls of recursively analyzing io-subsets.
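
A toy sketch of that matching idea (the scoring function and the random sampling are stand-ins of my own, not a claim about any particular contemporary algorithm):

```python
import random
from typing import Callable, Dict, Iterable, Tuple

def heuristic_score(rule: Callable, io_subset: Iterable[Tuple]) -> float:
    """Fraction of observed (input, output) pairs reproduced by a candidate rule."""
    pairs = list(io_subset)
    if not pairs:
        return 0.0
    return sum(1 for x, y in pairs if rule(x) == y) / len(pairs)

def best_heuristic(heuristics: Dict[str, Callable],
                   io_subset: Iterable[Tuple],
                   samples: int = 10) -> str:
    """Randomly sample candidate heuristics (a stand-in for the non-deterministic
    search mentioned above) and keep the one that best matches the io-subset."""
    pairs = list(io_subset)
    names = random.sample(list(heuristics), min(samples, len(heuristics)))
    return max(names, key=lambda n: heuristic_score(heuristics[n], pairs))
```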

Intuition leads me to believe that a further improvement may be to incorporate extant analytical models (biases) and system state (context). Additionally, instead of just pattern-matching io-subsets, it may help to reason about the properties of the candidate models themselves; the notes below on Property-Model Equivalence are a first attempt in that direction.

Some thoughts about Property-Model Equivalence Theory (to be clarified and explained later; a rough code transcription follows this list):

  • A proto-set is a fundamental prototype for a set.
  • A set is a proto-set that is defined by its AssociationsWith other sets.
    • I want to identify if context plays a role here and, if so, how.
    • What is an example of how context might influence the measure of association or definition of a set?
    • What is context?
  • AssociatedWith is a fundamental metric which accepts two sets as input and returns a measure.
  • A probability distribution is a set of ?
  • A domain is a set of valid inputs
  • A rule is a probability distribution that is AssociatedWith a domain
  • A model is a set of rules that is associated with a domain
  • A property is a model that is AssociatedWith a set. (NOTE: This is a REALLY important assertion!!!)
  • Some possible special rules associated with sets:
    • IsSubSetOf is a rule that is AssociatedWith a set, accepts one argument (another set), and returns a measure (that describes how well the argument contains the right properties and values of those properties to belong to the set), null or undefined
    • IsSuperSetOf
    • Binding (or instantiation) is a rule which accepts only one specific value as its argument and returns only one value or null
    • Nothing is a rule which accepts any value as its argument and returns only undefined
  • Some possible special properties associated with sets:
    • Fundamental-ness (General-ness, Beauty?, Intuitiveness?) - Don't I need a context for this?
    • Belief-ness - Don't I need a context for this?
    • Truth-ness - Don't I need a context for this?
  • Some possible special properties associated with models:
    • IsInstance (or IsObservation): An instance is a model that has and can have only one rule that is a binding
    • IsClass: A class is a model that cannot be an instance
  • Some special rules associated with models:
    • IsInstanceOf: Can be implemented as an extension of IsSubSetOf
    • IsClassOf: Can be implemented as an extension of IsSuperSetOf
    • IsSelf
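
A rough transcription of some of these assertions into code, purely to make the relationships concrete (every name and signature below is provisional, and the semantics are deliberately stubbed):

```python
from dataclasses import dataclass, field
from typing import Callable, Dict, List, Optional

@dataclass
class ProtoSet:
    """A fundamental prototype for a set; a set is a ProtoSet defined by its
    associations with other sets."""
    name: str
    associations: Dict[str, float] = field(default_factory=dict)

def associated_with(a: ProtoSet, b: ProtoSet) -> float:
    """AssociatedWith: accepts two sets and returns a measure (stubbed here as a
    stored association strength; 0.0 if none has been recorded)."""
    return a.associations.get(b.name, b.associations.get(a.name, 0.0))

@dataclass
class Rule:
    """A rule: informally, a distribution over outputs that is AssociatedWith a
    domain. A binding accepts one specific value and returns one value (or null)."""
    domain: ProtoSet
    apply: Callable[[object], Optional[object]]
    is_binding: bool = False

@dataclass
class Model:
    """A model: a set of rules associated with a domain. A property is then just
    a Model that is AssociatedWith some set."""
    domain: ProtoSet
    rules: List[Rule] = field(default_factory=list)

def is_instance(model: Model) -> bool:
    """IsInstance: a model that has, and can only have, one rule that is a binding."""
    return len(model.rules) == 1 and model.rules[0].is_binding
```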

New, open questions

  1. Why are biases a useful way of informing priors? See Abductive reasoning.
  2. How does one utilize heuristics, biases and context to inform decisions when analyzing an undefined model?
  3. Does one need to incorporate the notion of classes of rules rather than just patterned (statistical) vs un-patterned (instance) rules? My current intuition is no.
  4. What is a set?
  5. What is a measure?
  6. What is a domain?
  7. What is a codomain?
  8. What is a value?
  9. Can one use "intuition" to decide if an element belongs in a set (or, equivalently, to define the set)? In this sense, intuition refers to some measure of the fundamental-ness (general-ness) of the model, which may be related to the cost of incorporating, into the existing collection of models, a model representing the definition of the set and that model's belonging-ness to the set.
  10. The properties of models should also be models, meaning amenable to analysis.
  11. The set of properties that define a model are not the same thing as that model.
  12. Morphism
  13. Is the foundation of Set Theory also the foundation of Intelligence and logic?

How does one compare any two arbitrary models (hypotheses, functions)?

I'm also tempted to add the hypothesis that such a system will be able to evaluate general datasets (and types) better than random guessing. But I'll present a more reserved version: such a system, when effectively coupled with a logical reasoning system, will enable the realization of an adaptive agent that appears (to humans) to reason generally better than existing systems.

Effectively polling the existing model space for bias to inform intuition may not be a trivial process: the system may make better intuitive decisions if it selects some subset of model space for seeding intuition; however, in order to decide which elements belong in that subset, it may need to invoke intuition, and in deciding when to terminate, it may also need to invoke intuition. Instead of this potentially recursive process, it may be simpler to perform an arbitrary nearest-first evaluation of extant models to analyze the potential cost (? Bayesian expected loss ?). All models may need to be described by the properties of fundamental-ness (general-ness), degree of contextual belief and degree of universal truth.
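
A sketch of the simpler, nearest-first alternative, assuming each extant model exposes a distance to the current context and an evaluation cost (both of which are placeholders for whatever measures the real system would use):

```python
from typing import Callable, Iterable, List, Tuple

def nearest_first_evaluation(models: Iterable,
                             distance_to_context: Callable[[object], float],
                             expected_loss: Callable[[object], float],
                             budget: int) -> List[Tuple[object, float]]:
    """Evaluate extant models nearest-first rather than recursively invoking
    intuition to choose which models to consult; stop after `budget` models."""
    ordered = sorted(models, key=distance_to_context)            # nearest first
    scored = [(m, expected_loss(m)) for m in ordered[:budget]]   # e.g. a Bayesian expected loss
    return sorted(scored, key=lambda pair: pair[1])              # lowest expected loss first
```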

Some possible hypotheses about a system designed to effectively utilize recursive set definitions (self-referential modelling); such a system will (a toy illustration of recursive set definitions follows this list):

  1. adapt to changes in its environment better than a system which does not employ recursive set definitions
  2. reject noise, after training/adaptation, better than a system which does not employ self-referential modelling
  3. be able to ask questions
  4. be able to communicate using analogies
  5. be able to explain its beliefs and degrees of belief
  6. be able to discover novel and non-trivial solutions to problems
  7. be able to ask poignant, exploratory questions to advance its learning
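
To make "recursive set definitions" concrete, here is a tiny illustration of a set whose membership rule refers to other registered sets, including itself (the names and the depth guard are hypothetical choices of mine):

```python
from typing import Callable, Dict

# Registry of named membership rules; a rule may consult the registry, so a set's
# definition can reference other sets, including itself.
REGISTRY: Dict[str, Callable[[object, int], bool]] = {}

def member(set_name: str, x, depth: int = 8) -> bool:
    """Guarded lookup: recursion is bounded, so a self-referential definition
    cannot loop forever (one crude notion of operational stability)."""
    if depth <= 0:
        return False
    return REGISTRY[set_name](x, depth - 1)

# Example: small even non-negative integers, defined partly in terms of the set itself.
REGISTRY["even"] = lambda x, d: x == 0 or (x >= 2 and member("even", x - 2, d))

# member("even", 6) -> True; member("even", 7) -> False
```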

Why does it appear that Systems Engineering is devoid of scientifically verifiable theories?

The common goal of Systems Engineering is to clarify customer requirements and technical restrictions, then prepare a clear specification for the acquisition and operation of a system, and give certain guarantees (assurances) to the customer in order to help the customer minimize risk and cost (and time) overruns for complicated (usually precedented) systems. Unfortunately, after two weeks in an introductory course, I feel that much of the discipline is focused on the rigorous application of best practices, with very little concern for first principles; this is very similar to an observation I have made about much of Engineering in my professional experience.

In systems engineering, a great deal of effort goes into espousing protocols that work well in practice for establishing complex systems in a way that minimizes risk (first and foremost) and also reduces cost. The protocols that work well in practice and are easy to use become accepted by institutions, thereby becoming doctrine. Ultimately, this practice works well enough and has helped to improve and enable incredibly complex systems worldwide.

Given the success of engineering practice and the fact that human intuition enables us to reason abstractly and effectively, hand-waving seems fully acceptable, expedient and effective. As such, I think we should embrace this gift of intuition and utilize it extensively. However, I do not think we should allow any conjecture to remain without incrementally rigorous analysis for very long; a continuous effort should be made to investigate biases, assumptions and conjectures in order to minimize systematic risks and waste in the practice of engineering.

Unfortunately, the doctrines (protocols, traditions) utilized in the practice of systems engineering may have become so ingrained that they are no longer subject to continuous improvement and incremental formalism. The practice seems fraught with unverified hypotheses, traditions, hand-waving and, ultimately, waste; this may be the same challenge for all Engineering practices. We should not accept this unfortunate reality. I believe it is important, as in Mathematics and the sciences, to be able to rigorously demonstrate the validity of theories and to enumerate extant conjectures, assumptions and axioms in the context of their respective arts.

My goal is to apply incrementally rigorous formalism to everything that I do, including Engineering. As such, I seek a set of theories that will provide clear and verifiable predictions about the properties, strengths and limitations of Systems Engineering practices. I believe that such theories can be realized by verifying conjectures (including both extant protocols and new ideas) using rigorous experimentation and progressively increasing formalism. I believe this increasing formalism will help make the practice of Systems Engineering incrementally lean, effective, efficient, proactive, and provably cost-effective, with certain guarantees.

Notes

Interesting Reads