There is a group of people who have special communication needs because of disabilities. Problems in interacting with others cause deficits in social experience, reduce quality of life, and can have a devastating effect on personality. It therefore seems reasonable to try to remedy such situations with the support of computer technologies. Unfortunately, computer interfaces are usually operated with keyboards, mice, and display monitors, and some people are unable to use such standard equipment. However, contemporary hardware and software can help impaired or handicapped people communicate, live, and work more or less normally, because there are several ways of analyzing their intentions as expressed, e.g., by brain activity, eye movement, head movement, facial gestures, touch, speech, use of the feet, or use of breath and mouth. In this article, we discuss human-computer interface systems consisting of a device that analyzes the physical signals coming from the sources described above and a graphical or non-graphical user interface that can be operated by disabled persons. We also describe in some detail a new electronic platform, called C-Eye, which not only makes communication with its user possible but also allows the integration of medical measurements and computer techniques based on human-computer interaction.