Full metadata record
DC field | Value | Language
dc.rights.license | Attribution 4.0 International (CC BY) | es
dc.contributor.author | Mayr, Franz | es
dc.contributor.author | Yovine, Sergio | es
dc.contributor.author | Visca, Ramiro | es
dc.date.accessioned | 2021-09-30T13:22:36Z | -
dc.date.available | 2021-09-30T13:22:36Z | -
dc.date.issued | 2021-02 | -
dc.identifier.uri | https://hdl.handle.net/20.500.12381/457 | -
dc.description.abstract | This paper presents a novel on-the-fly, black-box, property-checking through learning approach as a means for verifying requirements of recurrent neural networks (RNN) in the context of sequence classification. Our technique steps on a tool for learning probably approximately correct (PAC) deterministic finite automata (DFA). The sequence classifier inside the black-box consists of a Boolean combination of several components, including the RNN under analysis together with requirements to be checked, possibly modeled as RNN themselves. On one hand, if the output of the algorithm is an empty DFA, there is a proven upper bound (as a function of the algorithm parameters) on the probability of the language of the black-box to be nonempty. This implies the property probably holds on the RNN with probabilistic guarantees. On the other, if the DFA is nonempty, it is certain that the language of the black-box is nonempty. This entails the RNN does not satisfy the requirement for sure. In this case, the output automaton serves as an explicit and interpretable characterization of the error. Our approach does not rely on a specific property specification formalism and is capable of handling nonregular languages as well. Besides, it neither explicitly builds individual representations of any of the components of the black-box nor resorts to any external decision procedure for verification. This paper also improves previous theoretical results regarding the probabilistic guarantees of the underlying learning algorithm. | es
dc.language.iso | eng | es
dc.publisher | MDPI | es
dc.rights | Open access | es
dc.source | Machine Learning and Knowledge Extraction | es
dc.subject | recurrent neural networks | es
dc.subject | probably approximately correct learning | es
dc.subject | black-box explainability | es
dc.title | Property Checking with Interpretable Error Characterization for Recurrent Neural Networks | es
dc.type | Article | es
dc.subject.anii | Ciencias Naturales y Exactas
dc.subject.anii | Ciencias de la Computación e Información
dc.identifier.anii | POS_ICT4V_2016_1_15, FSDA_1_2018_1_154419, FMV_1_2019_1_155913. | es
dc.type.version | Published | es
dc.identifier.doi | https://doi.org/10.3390/make3010010 | -
dc.anii.subjectcompleto | //Ciencias Naturales y Exactas/Ciencias de la Computación e Información/Ciencias de la Computación e Información | es
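
The abstract above describes a verdict scheme with an asymmetric guarantee: a nonempty answer comes with a concrete witness and is certain, while an empty answer only bounds the probability that the language of the black-box is nonempty. The short Python sketch below illustrates that kind of sampling-based bound only; it is not the authors' PAC DFA-learning tool, and the function names, the word-sampling distribution, and the toy black_box are assumptions introduced here for illustration.

import math
import random

# Illustrative sketch (not the authors' tool): a sampling-based emptiness test
# in the spirit of the probabilistic guarantee described in the abstract.
# All names, the sampling distribution, and the helper `black_box` are
# assumptions made here for illustration only.

def sample_word(alphabet, max_len, rng):
    """Draw a random word over `alphabet` with length uniform in [0, max_len]."""
    length = rng.randint(0, max_len)
    return tuple(rng.choice(alphabet) for _ in range(length))

def probably_empty(black_box, alphabet, epsilon=0.05, delta=0.01,
                   max_len=20, seed=0):
    """Return (verdict, witness).

    If no sampled word is accepted by `black_box` (e.g., the RNN composed with
    a negated requirement), report 'probably empty': with confidence 1 - delta,
    the probability mass of accepted words under the sampling distribution is
    below epsilon. Any accepted word is a concrete counterexample, so the
    verdict 'nonempty' is certain.
    """
    rng = random.Random(seed)
    num_samples = math.ceil(math.log(1.0 / delta) / epsilon)
    for _ in range(num_samples):
        w = sample_word(alphabet, max_len, rng)
        if black_box(w):
            return "nonempty", w          # certain violation, w is a witness
    return "probably empty", None         # PAC-style probabilistic guarantee


if __name__ == "__main__":
    # Toy black box: accepts words containing the forbidden pattern ('a', 'a').
    def black_box(word):
        return any(word[i] == "a" and word[i + 1] == "a"
                   for i in range(len(word) - 1))

    print(probably_empty(black_box, alphabet=["a", "b"]))

With epsilon = 0.05 and delta = 0.01 the sketch draws ceil(ln(1/0.01)/0.05) = 93 words; if none is accepted, then with confidence at least 1 - delta the probability of drawing an accepted word is below epsilon, mirroring the kind of upper bound mentioned in the abstract, though the paper obtains it through PAC DFA learning rather than plain sampling.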
Appears in collections: Publicaciones de ANII

Files in this item:
File | Description | Size | Format
make-03-00010.pdf |  | 981.25 kB | Adobe PDF

Works in REDI are protected by Creative Commons licenses.
For more information about the terms of this publication, visit: Attribution 4.0 International (CC BY)