<!doctype html>
<html lang="en">
<head>
<title>MMM2024</title>
<meta charset="utf-8">
<meta name="viewport" content="width=device-width, initial-scale=1">
<link href="https://use.fontawesome.com/releases/v5.2.0/css/all.css" media="screen" rel="stylesheet" type="text/css" />
<link href="css/frame.css" media="screen" rel="stylesheet" type="text/css" />
<link href="css/controls.css" media="screen" rel="stylesheet" type="text/css" />
<link href="css/custom.css" media="screen" rel="stylesheet" type="text/css" />
<link href='https://fonts.googleapis.com/css?family=Open+Sans:400,700' rel='stylesheet' type='text/css'>
<link href='https://fonts.googleapis.com/css?family=Open+Sans+Condensed:300,700' rel='stylesheet' type='text/css'>
<link href="https://fonts.googleapis.com/css?family=Source+Sans+Pro:400,700" rel="stylesheet">
<script src="https://ajax.googleapis.com/ajax/libs/jquery/3.3.1/jquery.min.js"></script>
<script src="js/menu.js"></script>
<script src="js/footer.js"></script>
<style>
.menu-submit {
color: rgb(0, 0, 0) !important;
opacity: 1 !important;
font-weight: 700 !important;
}
</style>
</head>
<body>
<div class="menu-container"></div>
<div class="content-container">
<div class="banner" style="background: url('img/feature.jpeg') no-repeat center; background-size: cover; height: 450px;">
<div class="banner-table flex-column" style="background-color: rgba(0, 0, 0, 0.5);">
<div class="flex-row">
<div class="flex-item flex-column">
<h1 class="add-top-margin-small">Calls for Contributions</h1>
</div>
</div>
</div>
</div>
<div class="banner" id="toi">
<div class="banner-table flex-column">
<div class="flex-row">
<div class="flex-item flex-column">
<h2 class="add-top-margin-small">Topics of Interest</h2>
</div>
</div>
</div>
</div>
<div class="content">
<div class="content-table flex-column">
<div class="flex-row">
<div class="flex-item flex-column full-width">
<h3>Multimedia Content Analysis</h3>
<hr>
<ul>
<li>Multimedia indexing</li>
<li>Multimedia mining</li>
<li>Multimedia abstraction and summarisation</li>
<li>Multimedia annotation, tagging and recommendation</li>
<li>Multimodal analysis for retrieval applications</li>
<li>Semantic analysis of multimedia and contextual data</li>
<li>Interactive learning</li>
<li>Multimedia knowledge acquisition and construction</li>
<li>Multimedia verification</li>
<li>Multimedia fusion methods</li>
<li>Multimedia content generation</li>
</ul>
<h3>Multimedia Signal Processing and Communications</h3>
<hr>
<ul>
<li>Media representation and algorithms</li>
<li>Multimedia sensors and interaction modes</li>
<li>Multimedia privacy, security and content protection</li>
<li>Multimedia standards and related issues</li>
<li>Multimedia databases, query processing, and scalability</li>
<li>Multimedia content delivery, transport and streaming</li>
<li>Wireless and mobile multimedia networking</li>
<li>Sensor networks (video surveillance, distributed systems)</li>
<li>Audio, image, video processing, coding and compression</li>
<li>Multi-camera and multi-view systems</li>
</ul>
<h3>Multimedia Applications, Interfaces and Services</h3>
<hr>
<ul>
<li>Media content retrieval, browsing and recommendation tools</li>
<li>Extended reality (AR/VR/MR) and virtual environments</li>
<li>Real-time and interactive multimedia applications</li>
<li>Multimedia analytics applications</li>
<li>Egocentric, wearable and personal multimedia</li>
<li>Urban and satellite multimedia</li>
<li>Mobile multimedia applications</li>
<li>Question answering, multimodal conversational AI and hybrid intelligence</li>
<li>Multimedia authoring and personalisation</li>
<li>Cultural, educational and social multimedia applications</li>
<li>Multimedia for e-health and medical applications</li>
</ul>
<h3>Ethical, Legal and Societal Aspects of Multimedia</h3>
<hr>
<ul>
<li>Fairness, accountability, transparency and ethics in multimedia modeling</li>
<li>Environmental footprint of multimedia modeling</li>
<li>Large multimedia models and LLMs</li>
<li>Multimodal pretraining and representation learning</li>
<li>Reproducibility, interpretability, explainability and robustness</li>
<li>Embodied multimodal applications and tasks</li>
<li>Responsible multimedia modeling and learning</li>
<li>Legal and ethical aspects of multimodal generative AI</li>
<li>Multimedia research valorisation</li>
<li>Digital transformation</li>
</ul>
</div>
</div>
</div>
</div>
<div class="banner" id="regular">
<div class="banner-table flex-column">
<div class="flex-row">
<div class="flex-item flex-column">
<h2 class="add-top-margin-small">Regular Papers</h2>
</div>
</div>
</div>
</div>
<div class="content">
<div class="content-table flex-column">
<div class="flex-row">
<div class="flex-item flex-column full-width">
<p class="text">
MMM is a leading international conference where researchers and industry practitioners share new ideas, original research results and practical development experiences from all MMM-related areas.
The conference calls for research papers reporting original investigation results and for demonstrations presenting novel and compelling applications.
</p>
<p class="text">
The proceedings of previous editions of MMM can be found <a href="https://link.springer.com/conference/mmm">here</a>.
</p>
</div>
</div>
</div>
</div>
<div class="banner" id="demo">
<div class="banner-table flex-column">
<div class="flex-row">
<div class="flex-item flex-column">
<h2 class="add-top-margin-small">Demonstrations</h2>
</div>
</div>
</div>
</div>
<div class="content">
<div class="content-table flex-column">
<div class="flex-row">
<div class="flex-item flex-column full-width">
<p class="text">
MMM 2024 calls for submissions reporting novel and compelling demonstrations of MMM-related technologies, in all areas listed in the call for (regular) papers. All kinds of working systems, prototypes or proofs of concept that demonstrate new solutions, interesting ideas or new applications of multimedia systems are welcome.
</p>
<p class="text">
Demonstration paper submissions have specific requirements for length, content, and supporting materials that should be submitted. Please check the <a href="authors.html#demo">submission guidelines</a> for details.
</p>
</div>
</div>
</div>
</div>
<div class="banner" id="brave">
<div class="banner-table flex-column">
<div class="flex-row">
<div class="flex-item flex-column">
<h2 class="add-top-margin-small">Brave New Ideas</h2>
</div>
</div>
</div>
</div>
<div class="content">
<div class="content-table flex-column">
<div class="flex-row">
<div class="flex-item flex-column full-width">
<p class="text">
The Brave New Ideas track of MMM 2024 calls for papers that suggest
new opportunities and challenges in the general domain of multimedia
analytics and modelling. A BNI paper is expected to stimulate activity
towards addressing new, long-term challenges of interest to the
multimedia modelling community. Papers should address topics with a
clear potential for high societal impact; authors should be able to
argue that their proposal is important to solving problems, to
supporting new perspectives, or to providing services that positively
impact people. Note that it is not necessary for papers in this track
to include large-scale experimental results or comparisons to the state
of the art, since large, publicly available datasets may not exist, and
there may be no existing approaches to which the proposed approach can
be compared.
</p>
<p class="text">
BNI papers should adhere
to the same formatting guidelines and page limits as the Regular and
Special Session papers.
</p>
</div>
</div>
</div>
</div>
<div class="banner" id="special">
<div class="banner-table flex-column">
<div class="flex-row">
<div class="flex-item flex-column">
<h2 class="add-top-margin-small">Special Session Papers</h2>
</div>
</div>
</div>
</div>
<div class="content">
<div class="content-table flex-column">
<div class="flex-row">
<div class="flex-item flex-column full-width">
<p class="text">
Special session papers must follow <a href="guidelines.html">the same guidelines</a> as regular research papers with respect to restrictions on formatting, length, and double-blind reviews.
The one exception is MDRE papers, which will undergo single-blind review; authors need not anonymize their MDRE papers because of the inherent difficulty of doing so for open datasets.
</p>
<ul>
<li>
<a href="specialpaper.html#s1">MDRE: Multimedia Datasets for Repeatable Experimentation.</a>
This special session focuses on sharing data and code to allow other researchers to replicate research results, with the long-term goal of improving the performance of systems and the reproducibility of published papers.
</li>
<li>
<a href="specialpaper.html#s2">MOMST: Multi-Object Multi-Sensor Tracking.</a>
This special session addresses the challenging problem of multi-object multi-sensor tracking in computer vision and machine learning, essential in applications such as surveillance systems, autonomous vehicles, and robotics.
</li>
<li>
<a href="specialpaper.html#s3">MARGeM: Multimodal Analytics and Retrieval of Georeferenced Multimedia.</a>
This special session focuses on multimodal analytics and retrieval techniques for georeferenced multimedia data, addressing challenges in lifelog computing, urban computing, satellite computing, and earth observation.
</li>
<li>
<a href="specialpaper.html#s4">ICDAR: Intelligent Cross-Data Analysis and Retrieval.</a>
This special session focuses on intelligent cross-data analysis and retrieval research, with the aim of bringing a smart, sustainable society to human beings.
</li>
<li>
<a href="specialpaper.html#s5">XR-MACCI: eXtended Reality and Multimedia - Advancing Content Creation and Interaction.</a>
This special session focuses on the latest advancements in extended reality (XR) and multimedia technologies, including the development and integration of XR solutions with multimedia analysis, retrieval and processing methods.
</li>
<li>
<a href="specialpaper.html#s6">FMM: Foundation Models for Multimedia.</a>
This special session focuses on the transformative impact of Foundation Models (FMs) such as large language models (LLMs) and large vision language models (LVLMs) and explores the future directions and challenges in harnessing FMs for multimedia applications.
</li>
<li>
<a href="specialpaper.html#s7">MULTICON: Towards Multimedia and Multimodality in Conversational Systems.</a>
This special session aims to present the most recent works and applications for addressing the challenges and opportunities in developing multimedia and multimodality-enabled conversational systems and chatbots. Indicative domains of application include healthcare, education, immigration, customer service, finance and others.
</li>
<li>
<a href="specialpaper.html#s8">CultMM: Cultural AI in Multimedia.</a>
This special session aims to bring together experts from Cultural AI and Multimedia to discuss the challenges surrounding cultural data, as well as the complexities of human culture, that require multimedia solutions.
</li>
</ul>
</div>
</div>
</div>
</div>
<div class="banner" id="vbs">
<div class="banner-table flex-column">
<div class="flex-row">
<div class="flex-item flex-column">
<h2 class="add-top-margin-small">Video Browser Showdown</h2>
</div>
</div>
</div>
</div>
<div class="content">
<div class="content-table flex-column">
<div class="flex-row">
<div class="flex-item flex-column full-width">
<p class="text">As in previous years, VBS 2024 will be part of the International Conference on MultiMedia Modeling
2024 (<a href="https://mmm2024.org/">MMM 2024</a>) in Amsterdam, The Netherlands, and organized as a special
side event to the Welcome Reception. It will be a moderated session where participants solve <b>Known-Item Search
(KIS)</b>, <b>Ad-Hoc Video Search (AVS)</b> and <b>Question Answering (Q/A)</b> tasks that are
issued as live presentations of scenes of interest, either as a <b>visual clip</b> or as a <b>textual description</b>.
The goal is to find the correct segments (for KIS exactly one segment, for AVS many segments) or the correct answer (for
Q/A tasks) as fast as possible and submit it (for KIS and AVS: a segment description – video id and frame
number) to the <a rel="noopener" href="http://videobrowsershowdown.org/call-for-papers/vbs-server/">VBS server
(DRES)</a>, which evaluates the correctness of submissions.</p>
<p class="text">
More information can be found at <a href="https://videobrowsershowdown.org/call-for-papers/">https://videobrowsershowdown.org/call-for-papers/</a>.
</p>
</div>
</div>
</div>
</div>
<div class="banner" id="mediaeval">
<div class="banner-table flex-column">
<div class="flex-row">
<div class="flex-item flex-column">
<h2 class="add-top-margin-small">MediaEval</h2>
</div>
</div>
</div>
</div>
<div class="content">
<div class="content-table flex-column">
<div class="flex-row">
<div class="flex-item flex-column full-width">
<p class="text">
The Benchmarking Initiative for Multimedia Evaluation (MediaEval) offers challenges related to multimedia analysis, retrieval and exploration. MediaEval tasks involve multiple modalities (e.g., audio, visual, textual and/or contextual) and focus on the human and social aspects of multimedia. The larger aim is to promote reproducible research that makes multimedia a positive force for society.
</p>
<p class="text">
More information can be found at <a href="https://multimediaeval.github.io/">https://multimediaeval.github.io/</a>.
</p>
</div>
</div>
</div>
</div>
<div class="footer-container"></div>
</body>
</html>