Recent Submissions

  • MoStGAN: Video Generation with Temporal Motion Styles

    Shen, Xiaoqian (2022-12-06) [Poster]
    Video generation remains a challenging task due to spatiotemporal complexity and the requirement of synthesizing diverse motions with temporal consistency. Previous works attempt to generate videos of arbitrary length either in an autoregressive manner or by regarding time as a continuous signal. However, they struggle to synthesize detailed and diverse motions with temporal coherence and tend to generate repetitive scenes after a few time steps. In this work, we argue that a single time-agnostic latent vector of a style-based generator is insufficient to model various and temporally consistent motions. Hence, we introduce additional time-dependent motion styles to model diverse motion patterns. In addition, a Motion Style Attention modulation mechanism, dubbed MoStAtt, is proposed to augment frames with vivid dynamics at each specific scale (i.e., layer): it assigns an attention score to each motion style w.r.t. the deconvolution filter weights in the target synthesis layer and softly attends over the different motion styles for weight modulation. Experimental results show our model achieves state-of-the-art performance on four unconditional $256^2$ video synthesis benchmarks trained with only 3 frames per clip and produces better qualitative results with respect to dynamic motions.
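    The attention-based weight modulation described in the abstract can be sketched roughly as follows. This is a minimal illustrative sketch, not the paper's implementation: the function name, tensor shapes, and the StyleGAN-style `1 + mixed` modulation are all assumptions made for clarity.

    ```python
    import numpy as np

    def mostatt_modulate(weights, motion_styles):
        """Hypothetical sketch of a MoStAtt-style step for one synthesis layer.

        weights:       (out_ch, in_ch) flattened deconvolution filter weights
        motion_styles: (num_styles, in_ch) time-dependent motion styles
        """
        # attention score of each motion style w.r.t. the filter weights
        scores = motion_styles @ weights.T               # (num_styles, out_ch)
        # softmax over styles: softly attend the motion styles per channel
        attn = np.exp(scores - scores.max(axis=0))
        attn = attn / attn.sum(axis=0)
        # attended mixture of styles for each output channel
        mixed = attn.T @ motion_styles                   # (out_ch, in_ch)
        # modulate the layer weights with the attended styles (assumed form)
        return weights * (1.0 + mixed)

    rng = np.random.default_rng(0)
    w = rng.standard_normal((8, 4))
    styles = rng.standard_normal((3, 4))
    print(mostatt_modulate(w, styles).shape)  # (8, 4)
    ```

    The key idea the sketch captures is that the attention weights, and hence the modulation, depend on the motion styles, which themselves vary over time, so each frame's synthesis weights can differ.
    
    
    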
  • Learning The Rules of Minihack Environment Using DreamerV2

    Alkhayat, Hussain (2022-12-06) [Poster]
    DreamerV2 has been shown to simulate visually dominated environments such as Atari games. However, can it simulate environments with hidden rules that are not visual? In this paper, I investigate whether the world model of DreamerV2 is capable of learning the non-visual rules of the MiniHack environment.