HASVERS'S PROFILE

Exeunt Omnes
A game of strategic sophistry. Convince or crush the teenage girl who wants to end your reign of evil.

Quad Pro Quo: Gameplay Footage

Haha yeah, you probably need to combine these weights with the ones you put in yourself as a protection against sheer stupidity.
Otherwise, just try 10 times more simulations and see what happens :P

Edit: Just read your edit, that's cool!
By the way, I was wondering: do you have combos like in TT where, if you take one card, it also takes the cards that that card took? (if that makes sense)
It doesn't look like it from the video, so I'm curious whether you prefer to avoid that.

Quad Pro Quo: Gameplay Footage

Interesting, thanks for the WIP report and please do continue!
I agree with weighting the closeness; one easy way of varying that continuously is taking a sigmoid weight 1/(1+exp(-k*dp)), where dp is the point difference in favour of the AI and k is the steepness of the transition. You could then see if e.g. k=1 does better than a high k (of course k close to 0 would be absurd). There are other things that may have to enter the weight (e.g. something like action potential, i.e. how many options for acting the AI had throughout the match), but I'd have to think more about the game mechanics to make relevant suggestions.
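For concreteness, that sigmoid weight could be computed like this (a minimal sketch; the function name and defaults are my own):

```python
import math

def sigmoid_weight(dp, k=1.0):
    """Weight in (0, 1) for a playout result, where dp is the final
    point difference in favour of the AI and k is the steepness of
    the transition."""
    return 1.0 / (1.0 + math.exp(-k * dp))

# A high k approaches a hard win/loss count (almost 0 or 1);
# a small k rewards winning by a larger margin more smoothly.
```

A narrow win (dp=2) gets weight ~0.88 at k=1 but ~1.00 at k=5, so k tunes how much the AI cares about margins versus bare wins.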

For the beginning, it's unfortunately no big surprise - chess and go players have to learn openings from specialized books, and likewise you probably have to guide the AI a little. Or bake in good openings by running much more extensive MC on starting positions and selecting those that seem best.

Quad Pro Quo: Gameplay Footage

We can definitely have a judge-prepared game as the main arena for AIs, though I wouldn't discourage people from making AIs for their own games or a friend's as part of the contest (I'm afraid we wouldn't have that many people otherwise, plus it facilitates inviting people from other gammak spheres). If we go the TT route though, I'm sure we can pester kentona for a badge for anyone who posts something AI-related during the event, to keep that as a side-benefit.

And I agree with Merlandese that it's a good way to get cheap labour, for this or some other idea you wanna try out someday :P For fun we could encourage creativity by having prizes for special categories - best genetic algorithm and so on - in case some people turn out to be CS-y enough, though it doesn't have to be heavy programming.

Quad Pro Quo: Gameplay Footage

Yeah definitely, reading someone else's code is very hard even if you know the language/engine well.

Which is why I was thinking of just letting people work on an AI for a game they submitted themselves if they prefer - like, you submit a full-featured demo of a battle system, but it can be taken from your current project. It could motivate people to come up with improved AIs for a game they already made, too (as long as they document the starting point and changes).

The "competitive AI" prize would be the only incentive to make an AI for someone else's game, to test it against theirs and others. Maybe no one would attempt that, which just means that I get to keep the prize :P
(It's easier if the person who made the system also made a simple interface between the battle system and the AI, so the AI code only has to e.g. pick moves from a list that is given to it, and the battle system returns what happened. This is part of good code design anyway, so it could be among the recommendations on the event page, though not mandatory.)
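That interface idea could look like this (a toy sketch; every name here is invented, and the battle system is a stand-in):

```python
import random

class RandomAI:
    """The simplest possible AI: pick any legal move."""
    def choose(self, legal_moves, state):
        return random.choice(legal_moves)

class ToyBattleSystem:
    """Stand-in battle system: the state is just the defender's HP,
    and a move is an attack value."""
    def legal_moves(self, state):
        return [1, 2, 3]

    def apply(self, state, move):
        new_state = state - move
        outcome = "ko" if new_state <= 0 else "hit"
        return new_state, outcome

def run_turn(system, ai, state):
    # The AI only ever sees a list of moves and picks one;
    # the battle system reports what happened.
    move = ai.choose(system.legal_moves(state), state)
    return system.apply(state, move)
```

The point is that swapping in a smarter AI only means replacing `choose`; the battle system's code never changes.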


(NB: repeatedly edited because man my thoughts are fluid tonight)

Quad Pro Quo: Gameplay Footage

author=Deltree
Sounds like an event in the making to me!

There's an idea! I had fun doing the RPGology thing and I'd be willing to do another, with a bit more publicity to get more participation. To avoid further derailing, here's an idea in spoiler tags; if you guys are interested to any extent, I guess we can discuss it further via PM or somewhere else.
To make an event that can boost productivity on real projects while allowing others to get involved, I'm thinking something where:
- before the event proper, people (contestants and others alike, including judges) can submit a fully working battle system/tactical minigame of some sort, with the codebase open. Maybe we filter out those that are not viable.
- then, contestants can choose any of the systems (including their own) and make an AI for it + an explanation of how it works. Prizes for technical achievement, best feel, and competitive AI when/if multiple ones are developed for the same battle system.
- there's a badge for people who post articles on their AI design during the contest (whether the articles are related to contest entries or not), and a prize for best exposition.


Also, have fun during your vacation!

Quad Pro Quo: Gameplay Footage

author=Deltree
Right now, it's weighting things in a literal sense, including biasing corners and protecting "weak" edges, but that could be me overthinking the strategy in some ways while potentially missing something obvious!

Damn, this is hard.

Yeah unfortunately there is no real getting around this - Monte Carlo is pretty good at mimicking "strategy", since it creates a global sense of whether a move is good or bad in the long term, but it is bad at the tactical busywork of exploiting local weaknesses and the like, i.e. finding the one sequence of moves that can resolve a small-scale situation. (which is why MC was not necessary for chess, but is for go)

For that, nothing beats expert knowledge. At most, you can delegate - export the battle mechanics into a small program to run on the side, and pit AIs against each other to let them evolve better weights by natural selection, then see what is left after 10000 generations :P Of course there are lots of pitfalls, so not always worth the effort. Except for being able to say that you simulate robo-deathmatches as a hobby.
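The evolve-the-weights idea, very roughly (a sketch with a placeholder fitness; in reality `play_match` would run your exported battle mechanics with each AI using its weight vector):

```python
import random

def mutate(weights, sigma=0.1):
    """Offspring: a noisy copy of a weight vector."""
    return [w + random.gauss(0, sigma) for w in weights]

def play_match(weights_a, weights_b):
    """Placeholder: stands in for an actual simulated match.
    Returns True if A wins."""
    return sum(weights_a) > sum(weights_b)

def evolve(pop, generations=100):
    """Pair AIs up, keep the winners, refill the population with
    mutated copies of them."""
    for _ in range(generations):
        random.shuffle(pop)
        winners = [a if play_match(a, b) else b
                   for a, b in zip(pop[::2], pop[1::2])]
        pop = winners + [mutate(w) for w in winners]
    return pop
```

The pitfalls mentioned above are real: with a self-play fitness, populations happily evolve to exploit each other's quirks rather than to play well in general.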

author=Gibmaker
Unfortunately RGSS is not a fast language so brute-force paradigms aren't really an option.
Ahh yeah, that's a pretty darn good point to consider; I've never tried to do actual computations in RM.

author=Merlandese
I wonder if it would work if the computer used Monte Carlo under the assumption that its opponent had the exact same hand the AI has
Maybe pretty well actually; though I think it's better for the AI to assume that you have the same deck instead - i.e. play out the Monte Carlo part as if both of you were playing cards at random from its own deck (after the move that it wants to test, of course).

The even simpler assumption is that the opponent can just play any card in the game at random; MC is largely about getting quick'n'dirty estimates so it's fine. But in the previous case, the AI can be "surprised" if you use a card it has never seen.


Heh, maybe we should write an article or forum topic on the AI tricks that worked for each of us, so that future generations learn from our suffering.

Quad Pro Quo: Gameplay Footage

A note inspired by Merlandese's comment, not necessarily for applying here:

If you want your AI to be smart at very low implementation cost, Monte Carlo methods are depressingly effective: for every possible move, have the AI play that move and then play out the rest of the match with random moves from both players. Do that a couple hundred times per candidate move (it's pretty much instantaneous) and select the one that leads to the largest fraction of wins.

That's much faster than exploring trees of successive moves beyond 2 or 3 steps, and it's a significant part of how modern Go algorithms rocketed from bad amateur to dan level (with a tiny bit more effort, of course).
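The whole loop fits in a few lines. Here is a sketch against a hypothetical game interface (`legal_moves`, `apply_move`, `is_over`, `ai_won` are assumptions, not anyone's actual API):

```python
import random

def mc_best_move(state, legal_moves, apply_move, is_over, ai_won,
                 playouts=200):
    """Monte Carlo move selection: for each candidate move, play it,
    finish the match with random moves from both sides many times,
    and pick the move with the best win fraction."""
    def playout(s):
        # Random play to the end of the match.
        while not is_over(s):
            s = apply_move(s, random.choice(legal_moves(s)))
        return ai_won(s)

    def win_rate(move):
        wins = sum(playout(apply_move(state, move))
                   for _ in range(playouts))
        return wins / playouts

    return max(legal_moves(state), key=win_rate)
```

Note there's no game-specific knowledge in there at all; everything the AI "understands" comes out of the random playouts.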

Of course, you can still make it fallible afterwards - interestingly, it is easy to give it a "personality" by strongly biasing the distribution of moves it knows to do and the one it thinks you will do.

Tallest-Reed

First day buy. Will play soon, I hope the entire game is in fact a thesis on the intricacies of post-Dwemer quantum theology.

screen1.png

Those should really be cliffracers.