• 0 Posts
  • 21 Comments
Joined 2 years ago
Cake day: June 16th, 2023

  • I’m not sure I’d trust modern CA to do Med3 justice. The new style of Total War is just a different beast from the sublime RTW/Med2 era.

    Lots of little things changed, and it just ‘hits different’. Probably the biggest difference is just that every single fight after the first 20 turns will be a 20-stack vs a 20-stack, and every single battle is life or death for that army. It makes the campaign much faster paced: declare war, wipe the stack, capture cities for 3 turns until the AI magics up another 20-stack.

    In the original Med2, since there wasn’t automatic replenishment, there were often battles between smaller stacks, even in the late game, as they were sent from the backline to reinforce the large armies on the front. That led to some of my greatest memories, trying to keep some random crossbowmen and cavalry alive against ambushing enemy infantry they wandered into. The need for manual reinforcement led to natural pauses in wars and gave the losing side a chance to regroup without relying on the insane AI bonuses of the modern TW games (and I do mean insane; they’ll have multiple full stacks supplied from a single settlement).


  • Explaining what happens in a neural net is trivial. All it does is approximate a (generally) nonlinear function with a long series of multiplications and some rectification operations.

    That isn’t the hard part; you can track all of the math at each step (see the sketch at the end of this comment).

    The hard part is stating a simple explanation for the semantic meaning of each operation.

    When a human solves a problem, we like to think that it occurs in discrete steps with simple goals: “First I will draw a diagram and put in the known information, then I will write the governing equations, then simplify them for the physics of the problem”, and so on.

    Neural nets don’t appear to solve problems that way; each atomic operation does not have that semantic meaning. That is the root of all the reporting about how they are such ‘black boxes’ and researchers ‘don’t understand’ how they work.
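
    A minimal sketch of that first point (the weights and sizes here are made up): a tiny two-layer net is nothing but matrix multiplies and a rectification, and every intermediate number is right there to inspect, yet none of them carries an obvious ‘step’ of reasoning on its own.

    ```python
    # Tiny two-layer network: multiply, rectify, multiply.
    # All values are invented for illustration.
    import numpy as np

    rng = np.random.default_rng(0)

    x  = rng.normal(size=3)        # input vector
    W1 = rng.normal(size=(4, 3))   # first-layer weights
    W2 = rng.normal(size=(2, 4))   # second-layer weights

    h = np.maximum(W1 @ x, 0.0)    # multiplications, then rectification (ReLU)
    y = W2 @ h                     # more multiplications: the output

    print("hidden activations:", h)  # every number is fully visible...
    print("output:", y)              # ...but none has a semantic label
    ```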


  • They aren’t the good guys. A lot (too much, if you ask the community) of the fiction is told from the perspective of the Imperium and the Space Marines, but that doesn’t make them the good guys.

    They go around saying things like “The rewards of tolerance are treachery and betrayal.” They clearly are not meant to be the good guys, even in their own stories.

    The problem is that media literacy is so poor that far too many people look at quotes like that and think “that’s a good point”. Even the creators have put out press releases about how the fascists are missing the point.


  • In the language of classical probability theory: the models learn the probability distribution of words in language from their training data, and then approximate this distribution using their parameters and network structure.

    When given a prompt, they then calculate the conditional probability of each possible next word, given the words they have already seen, and sample from that distribution.

    It is a rather simple idea; all of the complexity comes from trying to give the high-dimensional vector operations (that it is doing to calculate those conditional probabilities) a human meaning. A toy version of the idea is sketched below.
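
    A toy, count-based sketch of that pipeline (the training text is made up, and real models use a neural net over tokens rather than a lookup table over words): estimate the conditional distribution of the next word from data by counting, then repeatedly sample from it.

    ```python
    # Toy "language model": learn P(next word | previous word) by counting
    # bigrams in some training text, then sample a continuation of a prompt.
    # Real LLMs approximate this distribution with a neural net instead.
    import random
    from collections import Counter, defaultdict

    training_text = "the cat sat on the mat the cat ate the fish".split()

    # "Training": count how often each word follows each other word.
    counts = defaultdict(Counter)
    for prev, nxt in zip(training_text, training_text[1:]):
        counts[prev][nxt] += 1

    def next_word(prev):
        """Sample the next word from the learned conditional distribution."""
        words, freqs = zip(*counts[prev].items())
        return random.choices(words, weights=freqs, k=1)[0]

    # "Generation": extend the prompt one sampled word at a time.
    word = "the"
    generated = [word]
    for _ in range(5):
        if not counts[word]:   # no observed continuation for this word
            break
        word = next_word(word)
        generated.append(word)
    print(" ".join(generated))
    ```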