Atsunori Ogawa
NTT Communication Science Laboratories
Verified email at ieee.org - Homepage
Title
Cited by
Year
Improving transformer-based end-to-end speech recognition with connectionist temporal classification and language model integration
T Nakatani
Proc. INTERSPEECH 2019, 1408-1412, 2019
Cited by 270 · 2019
The NTT CHiME-3 system: Advances in speech enhancement and recognition for mobile multi-microphone devices
T Yoshioka, N Ito, M Delcroix, A Ogawa, K Kinoshita, M Fujimoto, C Yu, ...
2015 IEEE Workshop on Automatic Speech Recognition and Understanding (ASRU …, 2015
Cited by 269 · 2015
Single channel target speaker extraction and recognition with speaker beam
M Delcroix, K Zmolikova, K Kinoshita, A Ogawa, T Nakatani
2018 IEEE international conference on acoustics, speech and signal …, 2018
Cited by 214 · 2018
Linear prediction-based dereverberation with advanced speech enhancement and recognition technologies for the REVERB challenge
M Delcroix, T Yoshioka, A Ogawa, Y Kubo, M Fujimoto, N Ito, K Kinoshita, ...
REVERB Workshop, 2014
Cited by 128 · 2014
Speaker-aware neural network based beamformer for speaker extraction in speech mixtures
K Žmolíková, M Delcroix, K Kinoshita, T Higuchi, A Ogawa, T Nakatani
Proc. Interspeech 2017, 2655-2659, 2017
Cited by 123 · 2017
Low-latency real-time meeting recognition and understanding using distant microphones and omni-directional camera
T Hori, S Araki, T Yoshioka, M Fujimoto, S Watanabe, T Oba, A Ogawa, ...
IEEE transactions on audio, speech, and language processing 20 (2), 499-513, 2011
Cited by 106 · 2011
Error detection and accuracy estimation in automatic speech recognition using deep bidirectional recurrent neural networks
A Ogawa, T Hori
Speech Communication 89, 70-83, 2017
Cited by 94 · 2017
Semi-Supervised End-to-End Speech Recognition.
S Karita, S Watanabe, T Iwata, A Ogawa, M Delcroix
Interspeech, 2-6, 2018
Cited by 79 · 2018
Strategies for distant speech recognition in reverberant environments
M Delcroix, T Yoshioka, A Ogawa, Y Kubo, M Fujimoto, N Ito, K Kinoshita, ...
EURASIP Journal on Advances in Signal Processing 2015, 1-15, 2015
Cited by 76 · 2015
Multimodal SpeakerBeam: Single Channel Target Speech Extraction with Audio-Visual Speaker Clues.
T Ochiai, M Delcroix, K Kinoshita, A Ogawa, T Nakatani
INTERSPEECH, 2718-2722, 2019
Cited by 60 · 2019
Learning speaker representation for neural network based multichannel speaker extraction
K Žmolíková, M Delcroix, K Kinoshita, T Higuchi, A Ogawa, T Nakatani
2017 IEEE Automatic Speech Recognition and Understanding Workshop (ASRU), 8-15, 2017
Cited by 55 · 2017
Semi-supervised end-to-end speech recognition using text-to-speech and autoencoders
S Karita, S Watanabe, T Iwata, M Delcroix, A Ogawa, T Nakatani
ICASSP 2019-2019 IEEE International Conference on Acoustics, Speech and …, 2019
Cited by 53 · 2019
Context adaptive deep neural networks for fast acoustic model adaptation in noisy conditions
M Delcroix, K Kinoshita, C Yu, A Ogawa, T Yoshioka, T Nakatani
2016 IEEE International Conference on Acoustics, Speech and Signal …, 2016
Cited by 51 · 2016
Auxiliary Feature Based Adaptation of End-to-end ASR Systems.
M Delcroix, S Watanabe, A Ogawa, S Karita, T Nakatani
Interspeech 2018, 2444-2448, 2018
Cited by 47 · 2018
Text-informed speech enhancement with deep neural networks.
K Kinoshita, M Delcroix, A Ogawa, T Nakatani
INTERSPEECH, 1760-1764, 2015
Cited by 46 · 2015
Balancing acoustic and linguistic probabilities
A Ogawa, K Takeda, F Itakura
Proceedings of the 1998 IEEE International Conference on Acoustics, Speech …, 1998
Cited by 41 · 1998
Spatial correlation model based observation vector clustering and MVDR beamforming for meeting recognition
S Araki, M Okada, T Higuchi, A Ogawa, T Nakatani
2016 IEEE International Conference on Acoustics, Speech and Signal …, 2016
Cited by 40 · 2016
Speech recognition in the presence of highly non-stationary noise based on spatial, spectral and temporal speech/noise modeling combined with dynamic variance adaptation
M Delcroix, K Kinoshita, T Nakatani, S Araki, A Ogawa, T Hori, ...
Proc. 1st Int. Workshop on Machine Listening in Multisource Environments …, 2011
Cited by 38 · 2011
ASR error detection and recognition rate estimation using deep bidirectional recurrent neural networks
A Ogawa, T Hori
2015 IEEE International Conference on Acoustics, Speech and Signal …, 2015
Cited by 34 · 2015
Frame-level phoneme-invariant speaker embedding for text-independent speaker recognition on extremely short utterances
N Tawara, A Ogawa, T Iwata, M Delcroix, T Ogawa
ICASSP 2020-2020 IEEE International Conference on Acoustics, Speech and …, 2020
Cited by 32 · 2020
Articles 1–20