
Latest result from the Institute's joint research team accepted by Neurocomputing, a CAS Top-ranked journal

Published by: Zhang Yuxin | Date: 2022-04-04 | Views: 1541

由上海對(duì)外經(jīng)貿(mào)大學(xué)人工智能與變革管理研究院與北京郵電大學(xué)、華東師范大學(xué)計(jì)算機(jī)科學(xué)與技術(shù)學(xué)院聯(lián)合研發(fā)團(tuán)隊(duì)成員劉峰*、王晗陽(yáng)(碩士研究生)、張嘉淏(本科生)、付子旺(碩士研究生)、周愛(ài)民*、齊佳音、李志斌合作撰寫(xiě)的學(xué)術(shù)論文“EvoGAN: An Evolutionary Computation Assisted GAN”被SCI檢索期刊《Neurocomputing》(H-index=110,Impact=5.719)錄用,該期刊是中科院SCI分區(qū)升級(jí)版的TOP期刊,主要發(fā)表人工神經(jīng)網(wǎng)絡(luò)、神經(jīng)生物學(xué)、模糊邏輯學(xué)、腦與認(rèn)知科學(xué)、機(jī)器學(xué)習(xí)、模式識(shí)別等領(lǐng)域的一流研究成果。

Image synthesis has been widely applied across many fields. In recent years, advances in the theory and modeling of generative adversarial networks (GANs) have made it possible to recognize and compute human emotion from a single image and to generate convincingly realistic facial expressions. However, current models remain limited: they can only generate images with basic expressions, or mimic a single expression, rather than generate compound expressions. In real life, human expressions are highly diverse and complex. "EvoGAN: An Evolutionary Computation Assisted GAN" proposes an evolutionary algorithm (EA) assisted generative adversarial network, EvoGAN. EvoGAN uses an EA to search for target results within the data distribution learned by a GAN, and can generate compound-expression images matching any precise target. Further, by recognizing the expression of the synthesized image, it produces facial compound expressions for arbitrarily precise emotion targets. Notably, EvoGAN is among the first methods to use an evolutionary algorithm to search for target results in the data distribution learned by a GAN.


EvoGAN: An Evolutionary Computation Assisted GAN

Abstract: 

Image synthesis techniques are now well established and can generate facial images that are indistinguishable even by human beings. However, all of these approaches use gradients to condition the output, so the same input always yields the same output image. Moreover, they can only generate images with basic expressions, or mimic an expression, instead of generating compound expressions. In real life, however, human expressions are of great diversity and complexity. In this paper, we propose an evolutionary algorithm (EA) assisted GAN, named EvoGAN, to generate various compound expressions with any accurate target compound expression. EvoGAN uses an EA to search for target results in the data distribution learned by a GAN. Specifically, we use the Facial Action Coding System (FACS) as the encoding of the EA, use a pre-trained GAN to generate human facial images, and then use a pre-trained classifier to recognize the expression composition of the synthesized images as the fitness function to guide the search of the EA. Combined with a random search algorithm, various images with the target expression can be easily synthesized. Quantitative and qualitative results are presented on several compound expressions, and the experimental results demonstrate the feasibility and the potential of EvoGAN. The source code is available at https://github.com/faceeyes/EvoGAN.

Keywords: Evolutionary algorithms; GAN; facial expression synthesis
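The search loop the abstract describes (a FACS-style action-unit vector evolved by an EA, scored by how closely a classifier's predicted expression composition matches the target) can be sketched as follows. This is a minimal illustration, not the authors' implementation: `generate_image` and `classify_expression` are hypothetical stand-ins for the pre-trained GAN and classifier, and the number of action units and the mutation scheme are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
N_AUS = 17                       # number of FACS action units (assumed)
TARGET = np.array([0.5, 0.5])    # target composition, e.g. 50% of each expression

def generate_image(au_vector):
    # Stand-in for the pre-trained GAN conditioned on action units.
    return au_vector

def classify_expression(image):
    # Stand-in for the pre-trained expression classifier: here we map
    # the two halves of the AU vector to a softmax over two expressions.
    a, b = image[:N_AUS // 2].mean(), image[N_AUS // 2:].mean()
    e = np.exp([a, b])
    return e / e.sum()

def fitness(au_vector):
    # Negative distance between the predicted and the target composition;
    # the EA maximizes this, i.e. drives the prediction toward the target.
    pred = classify_expression(generate_image(au_vector))
    return -np.linalg.norm(pred - TARGET)

def evolve(pop_size=30, generations=50, sigma=0.1):
    pop = rng.uniform(0.0, 1.0, size=(pop_size, N_AUS))
    for _ in range(generations):
        scores = np.array([fitness(ind) for ind in pop])
        elite = pop[np.argsort(scores)[-pop_size // 2:]]      # keep best half
        children = elite + rng.normal(0.0, sigma, elite.shape)  # Gaussian mutation
        pop = np.clip(np.vstack([elite, children]), 0.0, 1.0)
    scores = np.array([fitness(ind) for ind in pop])
    return pop[scores.argmax()], scores.max()

best, best_fit = evolve()
```

Because the elite half is carried over unchanged each generation, the best fitness is monotonically non-decreasing; any individual whose predicted composition matches the target reaches the maximum fitness of zero.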