I always look at 'fit' before feasibility. You can always pare a goal down, work around limitations, and so forth.
How do you eat an elephant?
ETAs have always been a bit of a joke. I've met them only on very rare occasions. And timetables don't make for good drinking mates.
Good data representations, I've always found, have three things in common:
1) They are often simple, but not the simplest solution, both to implement and to understand.
2) They offer an easy route to a more flexible implementation down the line.
3) They follow best practices: single responsibility, open/closed, interface segregation, and the rest of SOLID.
For example, for the game I'm using Stencyl. Stencyl has scenes and actors, which can have different types of events and attributes or 'variables' attached to them. You can also attach behaviors to them, which encapsulate state as attributes and logic as events.
The individual instances of each behavior attached to a given actor type can have their instantiation values configured through the behavior screen. It's just a set of forms, one for each attribute the behavior contains, along with a description of each.
With the FSM behavior I created, there's a list attribute. Each entry is a comma-delimited string. I attach a copy of the FSM to an actor type, say actor 'walker', then go into behavior configuration and create a new entry in the 'state list' attribute. This new entry is broken down into 'stateName' (the first state in the list is the starting state), followed by pairs of 'transitionName, newState'.
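That entry format is easy to sketch outside Stencyl. Here's a minimal Python parser for it; the function name and the example state names are mine, for illustration only:

```python
def parse_state_entry(entry):
    """Parse one comma-delimited state-list entry of the form
    'stateName, transition1, newState1, transition2, newState2, ...'"""
    parts = [p.strip() for p in entry.split(",")]
    state_name = parts[0]
    # Everything after the state name comes in (transitionName, newState) pairs.
    transitions = list(zip(parts[1::2], parts[2::2]))
    return state_name, transitions

name, trans = parse_state_entry("idle, sawPlayer, chase, gotBored, wander")
# name  -> "idle"
# trans -> [("sawPlayer", "chase"), ("gotBored", "wander")]
```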
At each step, the game manager calls a .tick event for all the behaviors of all actors in the scene, including the FSM.
The FSM checks what state it is currently in, based on a 'current state' attribute, then looks up that state's index in the state list. It then splits the string at that index into a new list. Now here is where encapsulation comes in: each 'state' is just a behavior that holds all the messy details. Each transition is also a behavior, with a common custom event name, 'checkConstraint'.
The FSM loops through the state string for the given state stored in the state list, i.e. 'startingState, someTransitionBehavior1, someNewState1, someTransition2, etc.', and raises a checkConstraint event on each named transition behavior, one after another, until one of them returns true. When a transition behavior returns true, the FSM then calls a custom event common to all state behaviors, 'doState', on the state behavior paired with that transition in the state string.
The new state behavior then pushes itself onto a state stack, performs its state-related code on the next tick, and the whole process repeats. If no transition succeeds, the current state is popped from the stack and we return to the previous state. If we reach the last state and no transition succeeds, or there are no more states to pop from the stack, the FSM ends and the whole thing starts over.
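Putting that loop together, here's a rough Python sketch of the tick logic, with plain callables standing in for Stencyl's checkConstraint and doState custom events. All names here are illustrative, not Stencyl's API, and one simplification: the sketch runs the new state's action immediately inside tick, whereas in the real behavior the state does its work on the next tick.

```python
class FSM:
    """Minimal sketch of the stack-based FSM described above."""

    def __init__(self, state_table, transitions, states):
        self.state_table = state_table  # state name -> [(transition, new state), ...]
        self.transitions = transitions  # transition name -> predicate, () -> bool
        self.states = states            # state name -> action callable ("doState")
        # The first state in the table is the starting state.
        self.stack = [next(iter(state_table))]

    def tick(self):
        if not self.stack:
            # FSM ended: the whole thing starts over.
            self.stack = [next(iter(self.state_table))]
        current = self.stack[-1]
        for transition_name, new_state in self.state_table[current]:
            if self.transitions[transition_name]():  # checkConstraint
                self.states[new_state]()             # doState on the paired state
                self.stack.append(new_state)         # push the new state
                return
        # No transition succeeded: pop back to the previous state.
        self.stack.pop()
```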
Some of that may have been a little dense to read. A lot of what I've learned is really just standard low-level shite. AI can be fun to tool around with.
Zaimoni, if you're interested in AI too, I'd recommend reading Game AI Pro; they have the previous edition of the book listed out in chapters, and most of it is fantastic. Now, if you prefer well-written, step-by-step articles that go into detail without drowning you in it, and provide tons of visual examples, then I can point you in no other direction than www.redblobgames.com/, whose author is a veritable scholar in all but name. Really brilliant stuff, that. You have to explore his site, though, because while on the surface Red Blob has some fascinating work, below that there are articles so good I would pay for them if they were not free and I, not so poor.
Really, if you're interested in AI, you'd be doing yourself a disservice not to read those two sites.
For a general overview of AI, just because of the sheer volume of pages (38 at last count), take a glance at AI Game Dev, although you were probably already aware of that site. Similar to the last one, Gamasutra has more than a few excellent and very thorough articles that go into breathtaking depth, but you have to dig deep to find them. The guys at TIS, for example, the ones making Project Zomboid, did an article on behavior trees that is marvelous: the sort of clear material you'd see in beginner's DIY books.
On the topic of directives, subgoals and so forth, there's no general agreement on what works best. In practice, behavior trees are the industry go-to, but they've always seemed to me like overkill in every sense of the word: real powerful stuff, real flexible, expandable, easy to understand, and only moderately difficult to implement.
If you're looking for emergence or novelty, many games have accomplished that with multiple, interacting, simple finite state machines. A lot of non-linear, surprising things that appear uncanny, or even bordering on intelligent, can happen when you create several entities, each with its own simple set of goals and behaviors, and allow them to interact with or modify their environment, as well as react to elements of that environment. For example, take Pac-Man. Suppose ghosts always avoided big pellets. Suppose ghosts were attracted to fruit, and imagine that rather than eating the fruit, Pac-Man could push it around the maze in front of him. You would accidentally create the exact same mechanics and more, all because some programmer forgot to set a bit flag, or didn't set the friction high enough, so certain level elements could be unintentionally moved.
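To make the emergence point concrete, here's a deliberately tiny Python toy along those lines: a one-dimensional corridor where Pac-Man pushes a fruit ahead of him, and a ghost's only rule is "move one cell toward the fruit". Every rule and number here is invented for illustration; none of it is real Pac-Man logic.

```python
def step(pacman, fruit, ghost):
    # Pac-Man moves right; if he reaches the fruit he pushes it instead
    # of eating it (the "low friction" accident described above).
    if pacman + 1 == fruit:
        fruit += 1
    pacman += 1
    # The ghost's single rule: move one cell toward the fruit.
    ghost += 1 if fruit > ghost else -1 if fruit < ghost else 0
    return pacman, fruit, ghost

pacman, fruit, ghost = 0, 3, 10
for _ in range(5):
    pacman, fruit, ghost = step(pacman, fruit, ghost)
# After a few steps the ghost is parked on the fruit Pac-Man is
# pushing: a "lure the ghost" mechanic nobody explicitly programmed.
```

Two one-line rules, no coordination between them, and a new mechanic falls out of their interaction, which is the whole argument for simple interacting machines.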
Anyway, enjoy the reading. Just don't get stuck reading forever. I know from experience.