    Detecting Malicious URLs Using LSTM and Google’s BERT Models



The rise of cybercrime has made fraudulent webpage detection an essential task in keeping the web safe. These dangers, such as the theft of private information, malware, and viruses, are associated with online activity in emails, social media applications, and websites. The web threats in question, known as malicious URLs, are used by cybercriminals to lure users into visiting web pages that appear real or official.

This paper explores the development of a deep learning system involving a transformer algorithm to detect malicious URLs, with the aim of improving on an existing method, Long Short-Term Memory (LSTM). Devlin et al. (2019) introduced BERT, a natural language modelling algorithm developed by Google in 2018. This model is capable of making more accurate predictions, outperforming recurrent neural network methods such as Long Short-Term Memory (LSTM) and Gated Recurrent Units (GRU). In this project, I compared BERT's performance with LSTM as a text classification technique. With a processed dataset containing over 600,000 URLs, a pre-trained model is developed, and results are compared using performance metrics such as R² score, accuracy, recall, etc. (Seyyar et al., 2022). The LSTM algorithm achieved an accuracy rate of 91.36% and an F1 score of 0.90 (higher than BERT's) in the classification of both rare and common requests.

Keywords: Malicious URLs, Long Short-Term Memory, phishing, benign, Bidirectional Encoder Representations from Transformers (BERT).

    1.0 Introduction

With the usability of the Web through the Internet, the number of users has grown over the years. As digital devices of all kinds are connected to the internet, this has also resulted in an increasing number of phishing threats through websites, social media, emails, applications, etc. Morgan (2024) reported that more than $9.5 trillion was lost globally due to leaks of private information.

Consequently, innovative approaches have been introduced over the years to automate the task of ensuring safer internet usage and data protection. The Symantec 2016 Internet Security Report (Vanhoenshoven et al., 2016) shows that scammers have caused most cyber-attacks involving corporate data breaches on browsers and websites, as well as sheer volumes of other malware attempts that bait users with the Uniform Resource Locator (URL).

Structure of a URL (Image by author)

In recent years, blacklisting, reputation-based systems, and machine learning algorithms have been used by cybersecurity professionals to improve malware detection and make the web safer. Google's statistics reported that over 9,500 suspicious web pages are blacklisted and blocked per day. These malicious web pages represent a significant risk to the information security of web applications, particularly those that handle sensitive data (Sankaran et al., 2021). Because it is so easy to implement, blacklisting has become the standard approach, and it also keeps the false-positive rate low. The problem, however, is that it is extremely difficult to keep an extensive list of malicious URLs up to date, especially since new URLs are created every day. To circumvent filters and trick users, cybercriminals have come up with ingenious techniques, such as obfuscating a URL so that it looks genuine. The field of Artificial Intelligence (AI) has seen significant advances and applications across a variety of domains, including cybersecurity. One critical aspect of cybersecurity is detecting and preventing malicious URLs, which can otherwise lead to serious consequences such as data breaches, identity theft, and financial losses. Given the dynamic and ever-changing nature of cyber threats, detecting malicious URLs is a difficult task.

This project aims to develop a deep learning system for text classification, called Malicious URL Detection, using pre-trained Bidirectional Encoder Representations from Transformers (BERT). Can the BERT model outperform existing methods in malicious URL detection? The expected outcome of this study is to demonstrate the effectiveness of the BERT model in detecting malicious URLs and to compare its performance with recurrent neural network methods such as LSTM. I used evaluation metrics such as accuracy, precision, recall, and F1-score to compare the models' performance.

    2.0. Background

Machine learning methods such as Random Forest, Multi-Layer Perceptron, and Support Vector Machines, and deep learning methods such as LSTM and convolutional neural networks (CNNs), are just some of the techniques proposed in the existing literature for detecting harmful URLs. These methods have drawbacks, however: they require hand-crafted features and struggle with complex data, which can lead to overfitting.

2.1. Related works

To reduce the time spent obtaining page content or processing text, Kan and Thi (2005) categorised websites based on their URLs alone. Classification features were collected from each URL after it was parsed into several tokens, and token dependencies in time order were modelled from those characteristics. They concluded that the classification rate increased when high-quality URL segmentation was combined with feature extraction. This approach paved the way for further research on building complex deep learning models for text classification. Treating the detection of malicious URLs as a binary text classification problem, Vanhoenshoven et al. (2016) developed models and evaluated the performance of classifiers including Naive Bayes, Support Vector Machines, and Multi-Layer Perceptron. More recently, text embedding methods built on transformers have produced state-of-the-art results in NLP tasks. A similar model was devised by Maneriker et al. (2021), who pre-trained and fine-tuned an existing transformer architecture using only URL data. Their URL dataset included 1.29 million entries for training and 1.78 million entries for testing. Initially, the BERT architecture supported the masked language modelling framework, which is not needed in this report.

For the classification process, the BERT and RoBERTa algorithms were fine-tuned, and the results were evaluated and compared, to propose a model called URLTran (URL Transformers) that uses transformers to significantly improve malicious URL detection with very low false-positive rates compared to other deep learning networks. With this method, the URLTran model achieved an 86.80% true positive rate (TPR) against the best baseline's TPR of 71.20%, an improvement of 21.9%. The method was able to classify and predict whether a detected URL is benign or malicious.

Furthermore, an RNN-based model was proposed by Ren et al. (2019), where extracted URLs were converted into word vectors (characters) using pre-trained Word2Vec and then classified with a Bi-LSTM (bi-directional long short-term memory) network. After validation and evaluation, the model achieved 98% accuracy and an F1 score of 95.9%. This model outperformed almost all the NLP techniques, but it processed text characterisation only one step at a time. There is therefore a need for an improved model, based on BERT, that processes sequential input all at once. Although these models have demonstrated some improvement on big data, they are not without limitations. The sequential nature of text data, for instance, can be difficult for RNNs to handle, while CNNs often fail to capture long-term dependencies in the data (Alzubaidi et al., 2021). As the volume and complexity of textual data on the web continue to increase, existing models may become inadequate.

    3.0. Goals

This project highlights the importance of a bidirectional pre-trained model for text classification. Radford et al. (2018) used unidirectional language models for pre-training; by comparison, a shallow concatenation of independently trained left-to-right and right-to-left language models was also explored (Devlin et al., 2019; Peters et al., 2017). Here, I used a pre-trained BERT model to achieve state-of-the-art performance on a wide range of sentence-level and token-level tasks (Han et al., 2021), with the aim of outperforming many RNN architectures and thereby reducing the need for those frameworks. In this case, the hyper-parameters of the LSTM algorithm will not be fine-tuned.

Specifically, this research paper emphasises:

1. Developing LSTM and pre-trained BERT models to detect (classify) whether a URL is unsafe or not.
2. Comparing the results of the base model (LSTM) and pre-trained BERT using evaluation metrics such as recall, accuracy, F1 score, and precision. This helps determine whether the base model's performance is better or not.
3. BERT automatically learns latent representations of words and characters in context. The only task is to fine-tune the BERT model to improve on the baseline performance. This offers a computationally simple alternative to RNNs, in contrast to more resource-intensive and computationally expensive architectures.
4. Analysis, model development, and evaluation took about 7 weeks, and the aim was to achieve a significantly reduced training runtime with Google's BERT model.

    4.0. Methodology

This section explains all of the processes involved in implementing a deep learning system for detecting malicious URLs. Here, a transformer-based framework was developed from an NLP sequence perspective (Rahali and Akhloufi, 2021) and used to statistically analyse a public dataset.

Figure 4.0. Methodology process (adapted from Rahali and Akhloufi, 2021)

    4.1. The dataset

The dataset used for this report was compiled and extracted from Kaggle (license information). It was prepared to support the classification of webpages (URLs) as malicious or benign, and it provides URL entries for training, validation, and testing.

Image by author (code visualisation)

To investigate the data using deep learning models, a large dataset of 651,191 URL entries was retrieved from PhishTank, PhishStorm, and a malware domain blacklist. It contains:

    • Benign URLs: safe web pages to browse. Exactly 428,103 entries were known to be secure.
    • Defacement URLs: web pages used by cybercriminals or hackers to clone real, secure websites. These account for 96,457 URLs.
    • Phishing URLs: links disguised as genuine to trick users into providing personal and sensitive information, with the attendant risk of financial loss. 94,111 entries of the whole dataset were flagged as phishing URLs.
    • Malware URLs: links designed to manipulate users into downloading them as software or applications, thereby exploiting vulnerabilities. There are 32,520 malware webpage links in the dataset.
Table 4.1. The types of URLs and their fraction of the dataset (Image by author)
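To make the class distribution above concrete, here is a minimal pandas sketch. The file name malicious_phish.csv and the url/type column layout are assumptions based on the dataset description, not details confirmed by the paper.

```python
import pandas as pd

# Load the Kaggle malicious-URLs dataset (assumed file name and columns:
# 'url' holds the link, 'type' is benign/defacement/phishing/malware).
df = pd.read_csv("malicious_phish.csv")

# Class distribution, matching the counts listed in Table 4.1.
print(df["type"].value_counts())
```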

4.2. Feature extraction

For the URL dataset, feature extraction was used to transform raw input data into a format supported by machine learning algorithms (Li et al., 2020). It converts categorical data into numerical features, while feature selection picks a subset of relevant features from the original dataset (Dash and Liu, 1997; Tang and Liu, 2014).
View the data analysis and model development file here. The following steps were taken:

1. Combining the phishing, malware, and defacement URLs into a single malicious URL type for better selection. All URLs are then labelled benign or malicious.

2. Converting the URL types from categorical variables into numerical values. This is a crucial step because deep learning models train on numerical values only. Benign and malicious (phishing) URLs are encoded as 0 and 1, respectively, and stored in a new column called “Class”.

3. The ‘url_len’ feature computes the length of each URL in the dataset. Using the ‘process_tld’ function, the top-level domain (TLD) of each URL was extracted.

4. The presence of special characters [‘@’, ‘?’, ‘-’, ‘=’, ‘.’, ‘#’, ‘%’, ‘+’, ‘$’, ‘!’, ‘*’, ‘,’, ‘//’] was represented and added as columns to the dataset, alongside the ‘abnormal_url’ feature. This function uses a binary flag to verify whether each URL contains abnormalities.

5. A further selection was made on the dataset, covering the number of characters (letters and digits) and whether each entry uses HTTPS, a URL-shortening service, or an IP address. These provide additional information for training the model. A sketch of these steps is shown after this list.
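As a rough illustration of steps 2 to 5, the sketch below rebuilds the ‘url_len’, ‘process_tld’, and ‘abnormal_url’ features on the DataFrame loaded earlier. The function names come from the text; their bodies are plausible reconstructions, not the project's exact code, and the third-party tld package is an assumed dependency.

```python
import re
from urllib.parse import urlparse

from tld import get_tld  # assumed dependency: pip install tld

# Steps 1-2: collapse phishing/malware/defacement into one malicious class,
# encoded as benign = 0, malicious = 1 in a new 'Class' column.
df["Class"] = (df["type"] != "benign").astype(int)

# Step 3: URL length and top-level domain.
df["url_len"] = df["url"].apply(len)

def process_tld(url):
    # Extract the registered domain; returns None for unparsable URLs.
    try:
        res = get_tld(url, as_object=True, fix_protocol=True)
        return res.parsed_url.netloc
    except Exception:
        return None

df["domain"] = df["url"].apply(process_tld)

# Step 4: one count column per special character.
for ch in ["@", "?", "-", "=", ".", "#", "%", "+", "$", "!", "*", ",", "//"]:
    df[ch] = df["url"].apply(lambda u, c=ch: u.count(c))

def abnormal_url(url):
    # Binary flag: 0 if the hostname reappears in the URL string, else 1.
    hostname = urlparse(url).hostname or ""
    return 0 if hostname and re.search(re.escape(hostname), url) else 1

df["abnormal_url"] = df["url"].apply(abnormal_url)

# Step 5: extra lexical features, e.g. HTTPS scheme, digit and letter counts.
df["https"] = df["url"].apply(lambda u: 1 if urlparse(u).scheme == "https" else 0)
df["digits"] = df["url"].apply(lambda u: sum(c.isdigit() for c in u))
df["letters"] = df["url"].apply(lambda u: sum(c.isalpha() for c in u))
```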

4.3. Classification – model development and training

Using pre-labelled features, the model learns the association between labels and text from the training data. This stage involves identifying the URL types in the dataset. As an NLP technique, it requires assigning texts (words) to sentences and queries (Minaee et al., 2021). A recurrent neural network architecture defines the optimised model. The data was split into an 80% training set and a 20% testing set to keep the evaluation balanced; the split is sketched below. The texts were labelled using word embeddings for both the LSTM and the pre-trained BERT models. The dependent variable is the encoded URL type (the “Class” column), since this is a binary classification task.
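For reference, the 80/20 split can be reproduced with scikit-learn. This is a sketch under the assumption of the feature columns built in Section 4.2; the random seed is arbitrary.

```python
from sklearn.model_selection import train_test_split

# Numeric feature matrix: drop the raw URL string, the original label text,
# the extracted domain string, and the target itself.
X = df.drop(columns=["url", "type", "domain", "Class"])
y = df["Class"]

# 80% training / 20% testing, stratified to preserve class proportions.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=42
)
```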

4.3.1. Long short-term memory model

LSTM was found to be the most popular architecture because of its ability to capture long-term dependencies, with word2vec (Mikolov et al., 2013) used to train on billions of words. After preprocessing and feature extraction, the data was set up for LSTM model training, testing, and validation. To determine the appropriate sequence length, the number and size of the layers (input and output layers) were chosen before training the model. Hyperparameters such as the number of epochs, the learning rate, and the batch size were tuned to achieve optimal performance.

The memory cell of a typical LSTM unit has three gates (input gate, forget gate, and output gate) (Feng et al., 2020). Contrary to a feedforward neural network, the output of a neuron at one time step can be fed back into the same neuron as input (Do et al., 2021). To prevent overfitting, a dropout function is applied to several layers, one after the other. The first layer added is an embedding layer, which creates dense vector representations of the words in the input text data. However, only one LSTM layer was used in this architecture because of the long training time; a sketch of the architecture follows.
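The following Keras sketch mirrors that description: an embedding layer, a single LSTM layer, dropout, and a sigmoid output. The vocabulary size, embedding dimension, and LSTM width are illustrative assumptions rather than values reported in the paper; the dropout rate and batch size are those from Section 5.1.

```python
from tensorflow.keras import layers, models

VOCAB_SIZE = 10_000  # assumed tokenizer vocabulary size
EMBED_DIM = 64       # assumed embedding dimension

model = models.Sequential([
    # Embedding layer: dense vector representations of the input tokens.
    layers.Embedding(input_dim=VOCAB_SIZE, output_dim=EMBED_DIM),
    # A single LSTM layer, as in the paper, to keep training time manageable.
    layers.LSTM(64),
    # Dropout to reduce overfitting (0.2 is the rate reported in Section 5.1).
    layers.Dropout(0.2),
    # Sigmoid output for the binary benign/malicious decision.
    layers.Dense(1, activation="sigmoid"),
])

model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

# Training would use tokenised, padded URL sequences and the reported batch size:
# model.fit(X_train_seq, y_train, batch_size=1024, epochs=5, validation_split=0.1)
```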

4.3.2. BERT model

Researchers proposed the BERT architecture for NLP tasks because it delivers higher overall performance than RNNs and LSTMs. A pre-trained BERT model was implemented in this project to process text sequences and capture the semantic information of the input, which can help reduce training time and improve the accuracy of malicious URL detection. After the URL data was pre-processed, it was converted into sequences of tokens, and these sequences were fed into the BERT model for processing (Chang et al., 2021). Given the large number of data entries in this project, the BERT model was fine-tuned to learn the relevant features of each type of URL. Once trained, the model was used to classify URLs as malicious (phishing) or benign with improved accuracy and performance.

Google's BERT model architecture (Song et al., 2020)

Figure 4.3.2 describes the processes involved in model training with the BERT algorithm. A tokenisation phase is required to split text into tokens. First, raw text is separated into words, which are then converted into unique integer IDs via a lookup table. WordPiece tokenisation (Song et al., 2020) was implemented using the BertTokenizer class. The tokenizer combines the BERT token-splitting algorithm with a WordPieceTokenizer (Rahali and Akhloufi, 2021). It accepts words (sentences) as input and outputs token IDs, as sketched below.
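As an illustration, the Hugging Face transformers library (an assumed implementation choice; the paper does not name its tooling) pairs exactly this WordPiece tokenizer with a sequence-classification head for fine-tuning.

```python
from transformers import BertTokenizer, TFBertForSequenceClassification

# WordPiece tokenizer bundled with the pre-trained BERT checkpoint.
tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")

# A URL goes in as a plain string and comes out as integer token IDs.
enc = tokenizer(
    "http://example.com/login?user=admin",  # hypothetical example URL
    truncation=True,
    max_length=128,
    padding="max_length",
    return_tensors="tf",
)
print(enc["input_ids"])

# BERT with a two-class sequence-classification head, ready for fine-tuning
# on the benign vs. malicious labels.
model = TFBertForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=2
)
```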

    5.0. Experiments

Specific hyper-parameters were used for BERT, while an LSTM model with a single hidden layer was tuned based on its performance on the validation set. Because of the unbalanced dataset, only 522,214 entries were parsed, consisting of 417,792 training entries and 104,422 testing entries, an 80%/20% train-test split.

The parameters used for training are described below:

Table 5.0. Hyperparameters used in the Keras library for the LSTM and BERT models (Image by author)

    5.1. LSTM (baseline)

The results showed that a dropout rate of 0.2 and a batch size of 1024 achieved a training accuracy of 91.23% and a validation accuracy of 91.36%. Only one LSTM layer was used in the architecture because of the long training time (25.8 minutes on average). Adding more layers to the neural network creates a heavy computational burden, thereby reducing the model's overall performance.

LSTM algorithm experiment setup (Do et al., 2021)

5.2. Pre-trained BERT model

This model was tokenised, but one drawback was that the classifier could not be initialised from the checkpoint, so some layers were affected. The model requires a further sequence-classification step before pre-training. Expectations were not met because of the complex computation involved, although the architecture had been proposed as an excellent performer.

6.0. Results

The experimental outcomes of the two developed models were evaluated using performance metrics. These metrics show how well the models performed on the test data and are presented to evaluate the proposed approach's effectiveness in detecting malicious web pages.

6.1. Performance metrics

To evaluate the performance of the proposed models, a confusion matrix was used because of the evaluation measures it provides.

Table 6.1. Binary classification of actual and predicted outcomes
    • True Positive (TP): samples that are accurately predicted as malicious (phishing) (Amanullah et al., 2020).
    • True Negative (TN): samples that are accurately predicted as benign URLs.
    • False Positive (FP): samples that are incorrectly predicted as phishing URLs.
    • False Negative (FN): samples that are incorrectly predicted as benign URLs.
      Accuracy = (TP + TN) / (TP + TN + FP + FN)
      Precision = TP / (TP + FP)
      Recall = TP / (TP + FN)
      F1-score = (2 × Precision × Recall) / (Precision + Recall)
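These formulas can be checked directly against the confusion matrix, for instance with scikit-learn. This is a sketch assuming y_test and hard 0/1 predictions y_pred from the models above, not the project's actual evaluation code.

```python
from sklearn.metrics import classification_report, confusion_matrix

# y_pred: hard 0/1 predictions, e.g. (model.predict(X_test) > 0.5).astype(int)
tn, fp, fn, tp = confusion_matrix(y_test, y_pred).ravel()

accuracy = (tp + tn) / (tp + tn + fp + fn)
precision = tp / (tp + fp)
recall = tp / (tp + fn)
f1 = 2 * precision * recall / (precision + recall)

print(f"accuracy={accuracy:.4f} precision={precision:.4f} "
      f"recall={recall:.4f} f1={f1:.4f}")
print(classification_report(y_test, y_pred, target_names=["benign", "malicious"]))
```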
Table 6.2. Classification report for the developed models (Image by author)

The LSTM model achieved an accuracy of 91.36% and a loss of 0.25, while the pre-trained BERT model achieved a lower accuracy (75.9%) than expected because of a hardware malfunction.

    6.2. Validation

The LSTM performed well: based on its validation accuracy, it detects malicious URLs roughly 9 times out of 10.

Accuracy and loss on the validation set (LSTM). Image by author

However, the pre-trained BERT model could not reach the higher expectation because of the unbalanced and very large dataset.

Confusion matrices for the LSTM and BERT models (Image by author)

    7.0. Conclusion

Overall, LSTM models can be a powerful tool for modelling sequential data and making predictions based on temporal dependencies. However, it is important to carefully consider the nature of the data and the problem at hand before deciding to use an LSTM model, and to properly set up and tune the model to achieve the best results. Because of the large dataset, an increased batch size (1024) resulted in a shorter training time and improved the validation accuracy of the model.

BERT's weaker result could be due to inconsistent tokenisation during training and testing. BERT's maximum sequence length is 512 tokens, which can be inconvenient for some applications: if a sequence is shorter than the limit, padding tokens have to be added to it; otherwise, it has to be truncated (Rahali and Akhloufi, 2021). Also, to understand words and sentences better, BERT needs modified embeddings to represent context at the character level. Although these capabilities performed well with complex word embeddings, they can also lead to longer training times on larger datasets. Further research is needed to detect patterns during malicious URL detection.

    References

    • Alzubaidi, L., Zhang, J., Humaidi, A. J., Duan, Y., Santamaría, J., Fadhel, M. A., & Farhan, L. (2021). Review of deep learning: Concepts, CNN architectures, challenges, applications, future directions. Journal of Big Data, 8(1), 1-74. https://doi.org/10.1186/s40537-021-00444-8
    • Amanullah, M. A., Habeeb, R. A. A., Nasaruddin, F. H., Gani, A., Ahmed, E., Nainar, A. S. M., Akim, N. M., & Imran, M. (2020). Deep learning and big data technologies for IoT security. Computer Communications, 151, 495-517. https://doi.org/10.1016/j.comcom.2020.01.016
    • Chang, W., Du, F., & Wang, Y. (2021). Research on malicious URL detection technology based on the BERT model. IEEE 9th International Conference on Information, Communication and Networks (ICICN), Xi'an, China, pp. 340-345. doi: 10.1109/ICICN52636.2021.9673860
    • Dash, M., & Liu, H. (1997). Feature selection for classification. Intelligent Data Analysis, 1(1-4), 131-156. https://doi.org/10.1016/S1088-467X(97)00008-5
    • Devlin, J., Chang, M. W., Lee, K., & Toutanova, K. (2019). BERT: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805.
    • Do, N. Q., Selamat, A., Krejcar, O., Yokoi, T., & Fujita, H. (2021). Phishing webpage classification via deep learning-based algorithms: An empirical study. Applied Sciences, 11(19), 9210.
    • Feng, J., Zou, L., Ye, O., & Han, J. (2020). Web2Vec: Phishing webpage detection method based on multidimensional features driven by deep learning. IEEE Access, 8, 221214-221224. doi: 10.1109/ACCESS.2020.3043188
    • Han, X., Zhang, Z., Ding, N., Gu, Y., Liu, X., Huo, Y., Qiu, J., Yao, Y., Zhang, A., Zhang, L., Han, W., Huang, M., Jin, Q., Lan, Y., Liu, Y., Liu, Z., Lu, Z., Qiu, X., Song, R., . . . Zhu, J. (2021). Pre-trained models: Past, present and future. AI Open, 2, 225-250. https://doi.org/10.1016/j.aiopen.2021.08.002
    • Kan, M.-Y., & Thi, H. O. N. (2005). Fast webpage classification using URL features. pp. 325-326. doi: 10.1145/1099554.1099649
    • Li, Q., Peng, H., Li, J., Xia, C., Yang, R., Sun, L., Yu, P. S., & He, L. (2020). A survey on text classification: From shallow to deep learning. arXiv preprint arXiv:2008.00364.
    • Maneriker, P., Stokes, J. W., Lazo, E. G., Carutasu, D., Tajaddodianfar, F., & Gururajan, A. (2021). URLTran: Improving phishing URL detection using transformers. arXiv preprint arXiv:2106.05256.
    • Mikolov, T., Sutskever, I., Chen, K., Corrado, G. S., & Dean, J. (2013). Distributed representations of words and phrases and their compositionality. Advances in Neural Information Processing Systems, 26.
    • Minaee, S., Kalchbrenner, N., Cambria, E., Nikzad, N., Chenaghlu, M., & Gao, J. (2021). Deep learning-based text classification. ACM Computing Surveys, 54(3), 1-40. https://doi.org/10.1145/3439726
    • Morgan, S. (2024). 2024 Cybersecurity Almanac: 100 facts, figures, predictions and statistics. Cybersecurity Ventures. https://cybersecurityventures.com/2024-cybersecurity-almanac/
    • Peters, M. E., Ammar, W., Bhagavatula, C., & Power, R. (2017). Semi-supervised sequence tagging with bidirectional language models. arXiv preprint arXiv:1705.00108. https://arxiv.org/abs/1705.00108
    • Radford, A., Narasimhan, K., Salimans, T., & Sutskever, I. (2018). Improving language understanding by generative pre-training. https://www.cs.ubc.ca/~amuham01/LING530/papers/radford2018improving.pdf
    • Rahali, A., & Akhloufi, M. A. (2021). MalBERT: Using transformers for cybersecurity and malicious software detection. arXiv preprint arXiv:2103.03806.
    • Ren, F., Jiang, Z., & Liu, J. (2019). A bi-directional LSTM model with attention for malicious URL detection. 2019 IEEE 4th Advanced Information Technology, Electronic and Automation Control Conference (IAEAC), 1, 300-305.
    • Sankaran, M., Mathiyazhagan, S., & Dharmaraj, M. (2021). Detection of malicious URLs using machine learning techniques. Int. J. of Aquatic Science, 12(3), 1980-1989.
    • Seyyar, Y. E., Yavuz, A. G., & Ünver, H. M. (2022). An attack detection framework based on BERT and deep learning. IEEE Access, 10, 68633-68644. doi: 10.1109/ACCESS.2022.3185748
    • Song, X., Salcianu, A., Song, Y., Dopson, D., & Zhou, D. (2020). Fast WordPiece tokenization. arXiv preprint arXiv:2012.15524.
    • Tang, J., Alelyani, S., & Liu, H. (2014). Feature selection for classification: A review. Data Classification: Algorithms and Applications, p. 37.
    • Vanhoenshoven, F., Nápoles, G., Falcon, R., Vanhoof, K., & Köppen, M. (2016). Detecting malicious URLs using machine learning techniques. IEEE.


