Humaina

Humaina

IT Services and IT Consulting

Your boutique AI consultancy -- specialising in natural language processing / LLMs / transformers / deep learning

About us

We have extensive expertise in strategy development, proof-of-concept development, and the subsequent construction of AI solutions that integrate seamlessly into your business. Would you like to start a conversation to explore the process and expected outcomes of an AI project? We would love to hear your vision, solve complex problems, and build a solution tailored to your business needs. Our services range from IT consulting to building end-to-end software and AI solutions.

Website
http://humaina.co.uk
Industry
IT Services and IT Consulting
Company size
2-10 employees
Headquarters
London
Type
Privately Held
Founded
2014
Specialties
data science, data engineering, full-stack engineering, IT consulting, AI consulting, Python, and software engineering

Locations

Employees at Humaina

Updates

  • View organization page for Humaina

    286 followers

    🚀 Exciting news! 🚀 We're thrilled to introduce the latest innovation at Humaina – our state-of-the-art private Large Language Model (LLM) platform, designed to revolutionise the way we approach consulting! With the power of advanced AI, our new private LLM application is set to transform how we deliver insights, strategies, and solutions to our clients -- without compromising GDPR compliance 🥳

    Here's how it can boost your business:

    🔍 Enhanced data analysis: Our private open-source LLMs process vast amounts of data with speed and accuracy, allowing you to uncover deeper insights and trends that drive informed decision-making.

    🤖 Intelligent automation: By automating routine tasks and data processing, we can focus more on crafting innovative strategies and personalised solutions for our clients.

    💬 Improved communication: The LLM helps generate clear, concise, and impactful reports and correspondence, ensuring you receive the most value from our engagements and a high return on investment.

    💡 Innovative problem solving: With their ability to understand and generate human-like text, our LLMs assist in brainstorming and developing creative solutions to complex business challenges.

    We are committed to leveraging cutting-edge technology to enhance our services and deliver exceptional value to our clients. This is just the beginning of a new era for Humaina as we continue to push the boundaries of what's possible in consulting. Stay tuned for more updates as we integrate this powerful tool into our daily operations and client engagements. Exciting times ahead!

    https://lnkd.in/dwS--PMW

    #Innovation #AI #Consulting #LLM #BusinessGrowth #TechForGood #FutureOfConsulting #ClientSuccess

  • View organization page for Humaina

    AI safety is sadly an under-debated topic in the upcoming 🇬🇧 UK elections. While the Labour Party has publicly declared it wants to become a global leader in AI regulation, the ruling Tory party has confirmed it is already using its own RAG implementation to help the UK parliament manage the vast volume of classified written documents this body possesses. I have not found any meaningful statement on how the Tories plan to ensure data privacy and AI ethics if re-elected, or on how SMEs in the UK can use AI to compete in an increasingly digital global economy. https://lnkd.in/euM3Dju2 🇹🇷 Turkey has just announced it is following in the EU's footsteps and adopting a regulatory framework very similar to the EU AI Act. Should the UK do the same? Which other strategically relevant countries are worth watching? I'd really appreciate your thoughts on this 🌻🤖🌈🧭

    View profile for Raymond Sun
    Raymond Sun is an Influencer

    Top Voice in AI Law (Global) | Lawyer & Full-Stack Developer | Follow me for AI law & regulation updates across the world | techie_ray

    If you're into #AIregulation, add Turkey to your watch list. 🇹🇷

    #Turkey is a unique market, at the crossroads of Asia, Europe and the Middle East.
    ▪ Geographically, it sits in the heart of Eurasia (mostly in Asia Minor).
    ▪ Politically, modern Turkey is inspired by European civil-law models.
    ▪ Culturally, Turkey has a predominantly Muslim demographic.

    Given its central position, it's interesting to see which 'side' Turkey will follow on blurry policy issues like AI regulation, which has seen many diverse approaches across the East and West. So far, Turkey is following the #EUAIAct, as announced in its Medium-Term Program 2024-26. On 24 June, Turkey introduced the draft "Artificial Intelligence Bill No. 2/2234". Similar to the EU AI Act, this bill will broadly regulate AI systems based on risk level (e.g. prohibited systems and high-risk systems): https://lnkd.in/gRTfG_ZY

    Key aspects of the bill include (based on English sources):
    1️⃣ Applies to stakeholders like "providers", "distributors", "importers" and "users".
    2️⃣ Mandatory principles on the development of AI systems (i.e. security, transparency, fairness, accountability, and privacy).
    3️⃣ High-risk classification of certain AI systems (e.g. autonomous vehicles, medical diagnostic systems, AI in law enforcement), which requires further precautions (e.g. a conformity assessment plus registration requirements).
    4️⃣ Penalties for breach -- e.g. TRY 35 million or 7% of annual turnover for deploying prohibited AI systems; TRY 7.5 million or 1.5% of annual turnover for providing false information related to AI.

    While Turkey is not yet a global leader in #AI, it has a formidable tech sector:
    ▪ A booming innovation hub, built on Turkey's young population (median age of 32) and over 100 "technology parks".
    ▪ A strong B2C market, especially in consumer electronics, gaming and e-commerce (with large brands like Vestel and Beko).
    ▪ Well-connected telco infrastructure, with Istanbul a key hub for underwater cable traffic between Europe and Asia (attracting investment from big cloud companies).
    ▪ A major producer and exporter of unmanned drone tech (though mostly for military use).

    As Turkey's AI sector continues to grow, I'm curious to see how Turkey will adapt and customise its EU-style AI law over time. In that sense, Turkey fits into a broad category of nations leaning towards the EU risk-based model, including Australia, Brazil, Canada, South Korea, Japan and Thailand (though with different levels of alignment). Of these countries, Turkey's strong B2C and hardware-focused sector reminds me of South Korea and Japan, both of whom favour more lenient "pro-business" regulation. Perhaps Turkey will follow suit?

    👓 Want more? Check out my Global AI Regulation Tracker, which tracks AI policy and law developments across the world on an interactive world map (see link in the 'Visit my website' button above).

  • View organization page for Humaina

    A very good explanation of AI guardrails by the VP of Product at IBM. And some great news: our private LLM platform for businesses operating in heavily regulated industries comes with a bi-directional toxicity classifier out of the box 🎉 We can also offer customised guardrail solutions to comply with your industry-specific requirements. Contact us for an informal chat about your project and your pain points in bringing LLM solutions to production, and let's discover together how we can help.

    View profile for Armand Ruiz
    Armand Ruiz is an Influencer

    VP of Product - AI Platform @IBM

    A key feature you cannot forget in your GenAI implementation: AI guardrails.

    What are AI guardrails?
    Guardrails are programmable rules that act as safety controls between a user and an LLM or other AI tools.

    How do guardrails function with AI models?
    Guardrails monitor communication in both directions and take actions to ensure the AI model operates within an organization's defined principles.

    What is the purpose of implementing guardrails in AI systems?
    The goal is to control the LLM's output, such as its structure, type, and quality, while validating each response.

    What risks do guardrails mitigate in AI systems?
    Guardrails can help prevent AI models from stating incorrect facts, discussing harmful subjects, or opening security holes.

    How do guardrails protect against technical threats to AI systems?
    They can protect against common LLM vulnerabilities, such as jailbreaks and prompt injections.

    Guardrails fall into three broad categories:
    1/ Topical guardrails: ensure conversations stay focused on a particular topic
    2/ Safety guardrails: ensure interactions with an LLM do not result in misinformation, toxic responses, or inappropriate content
    3/ Hallucination detection: ask another LLM to fact-check the first LLM's answer to detect incorrect facts

    Which guardrails system do you implement in your AI solutions?
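    The bi-directional monitoring described above can be sketched in a few lines of Python. This is a toy illustration, not any vendor's API: the keyword lists and the `fake_llm` stand-in are assumptions, and a production system would use trained classifiers (and, for hallucination detection, a second fact-checking model) rather than substring matching.

    ```python
    # Toy sketch of the guardrail pattern: rules sit between the user and the
    # model, screening traffic in both directions. All names and word lists
    # here are illustrative placeholders.

    BLOCKED_TOPICS = {"politics", "medical advice"}   # topical guardrail (toy list)
    TOXIC_TERMS = {"idiot", "stupid"}                 # safety guardrail (toy list)

    def fake_llm(prompt: str) -> str:
        """Stand-in for a real LLM call."""
        return f"Here is a helpful answer to: {prompt}"

    def violates_topic(text: str) -> bool:
        return any(topic in text.lower() for topic in BLOCKED_TOPICS)

    def is_toxic(text: str) -> bool:
        return any(term in text.lower() for term in TOXIC_TERMS)

    def guarded_chat(prompt: str) -> str:
        # Inbound checks: screen the user's message before it reaches the model.
        if violates_topic(prompt):
            return "Sorry, that topic is out of scope for this assistant."
        if is_toxic(prompt):
            return "Please rephrase your message without abusive language."
        # Outbound checks: screen the model's answer before it reaches the user.
        answer = fake_llm(prompt)
        if is_toxic(answer) or violates_topic(answer):
            return "The generated answer was withheld by a safety guardrail."
        return answer

    print(guarded_chat("How do I reset my password?"))
    print(guarded_chat("Tell me about politics."))
    ```

    The same wrapper shape extends to hallucination detection: after the outbound checks, pass `answer` to a second model that judges factual accuracy and withhold the response if it fails.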

  • Humaina reposted this

    View organization page for Humaina

    The Justice Department and the Federal Trade Commission have agreed to divide responsibility for investigating three major players in the artificial intelligence industry. Here at Humaina we have always been very vocal about data privacy and consumer rights, and we see this as a positive step in the right direction. However, we believe it is just a small band-aid for the growing public concern over how the data we generate using common productivity tools is being unapologetically used to train closed-source AI models for companies on a cult-like mission to build AGI (artificial general intelligence). If you refuse to agree to the updated T&Cs and to share your ideas, business plans, designs, and customers' data with these platforms, you lose your access -- meaning you lose all the work you have stored on their servers and the network of clients and suppliers you have built over years. This investigation by the US federal government is unlikely to yield any measurable results, because those bodies rely on Big Tech to act as their advisors and fund them via lobbying. With the help of experienced consultants who understand the latest AI tools as well as the regulatory landscape, you can optimise your business processes whilst staying true to your values and remaining fully in control of your data and AI safety. Contact us for an informal chat.

    View profile for Mark Montgomery

    Founder & CEO of KYield. Pioneer in Artificial Intelligence, Data Physics and Knowledge Engineering.

    Big news from the NYT this morning -- the DOJ will lead an antitrust investigation into the AI market with a focus on Microsoft, OpenAI and Nvidia. Although Nvidia may be a surprise to some, it would be difficult to investigate the AI market without also investigating Nvidia, with particular interest in the supply of chips. If, for example, companies have been acquiring large quantities of specialty chips like H100 GPUs to prevent competition rather than to generate revenue, then that could be prosecutable behavior.

    It's recently been reported that Microsoft is Nvidia's largest customer, and Nvidia's CFO said that about 45% of their datacenter business comes from Big Tech. Since Big Tech companies have more money than anyone else, much of it from monopolies, if they are using that market power to prevent competition -- whether by attempting to buy up the chips, specialty talent, or startups -- then that's clearly anticompetitive behavior and probably illegal. There is also no question in my mind, and many others', that this is precisely what's been occurring for many years now.

    I think it's probably good news that the DOJ Antitrust Division is leading the investigation, as it has a better track record than the FTC. The real question is how much the presidential election year is influencing this. I don't have enough evidence to comment on that, other than to say I think the Big Tech antitrust issue is bipartisan and appears to be becoming a much larger concern across society, industries, and countries.

    U.S. Clears Way for Antitrust Inquiries of Nvidia, Microsoft and OpenAI

    https://www.nytimes.com

  • Humaina reposted this

    View organization page for Humaina

    If you have recently agreed to Adobe's updated T&Cs, you might soon find stochastic variations of your creative work all over the internet -- with no attribution or compensation for you. You consented to this 🤓.

    View profile for Axel C.

    3x founder | I share tips on how to embrace AI to help business owners drive revenues and efficiency | Founder & CEO @Coming Soon (AI Chatbots, Voice Analytics, Automation)

    EVERY SOFTWARE COMPANY seems to be gradually updating its terms for AI. Here we have an Adobe Photoshop user locked out of the application until he's agreed to the new Terms of Use ✨🎨

    One section of the terms reads 👀 "Solely for the purposes of operating or improving the Services and Software, you grant us a non-exclusive, worldwide, royalty-free sublicensable, license, to use, reproduce, publicly display, distribute, modify, create derivative works based on, publicly perform, and translate the Content. For example, we may sublicense our right to the Content to our service providers or to other users to allow the Services and Software to operate with others, such as enabling you to share photos."

    Users on Twitter (X) have raised the issue that, as professionals, they are often under NDA and don't feel comfortable signing up to terms that allow Adobe or others to access their content or train their AI models. "Designer Wetterschneider, who counts DC Comics and Nike among his clients, was one of the graphics pros to object to the terms." "Here it is. If you are a professional, if you are under NDA with your clients, if you are a creative, a lawyer, a doctor or anyone who works with proprietary files -- it is time to cancel Adobe, delete all the apps and programs. Adobe can not be trusted." (see comments)

    Should users be entitled to retrieve their cloud-hosted files before having to approve the new terms? Slack recently had to clarify its own terms to its users. Do you expect more software companies to follow this path in their quest to develop and train their own AI models? Will this encourage more users to self-host files and to seek out open-source applications that don't impose such terms?

    #ai #adobe #photoshop #terms #tou #nda #llm #genai

    Source: https://lnkd.in/dvcGyhui

  • Humaina reposted this

    View organization page for Humaina

    As AI technology advances, the debate over safety and liability intensifies. California's SB 1047 aims to hold AI developers accountable, requiring companies spending over $100M on training frontier models to conduct rigorous safety testing. Critics argue it stifles innovation, while supporters emphasize the need for responsible AI development. At Humaina, we recognize the importance of balancing innovation with safety. That's why we offer bespoke safeguarding mechanisms for our private open-source LLMs, tailored to your industry's and clients' needs. Ensuring AI safety doesn't mean halting progress; it means fostering trust and reliability. https://lnkd.in/gMCyEmRt #AI #Innovation #Safety #Humaina #TechRegulation #AIDevelopment

    The AI bill that has Big Tech panicked

    vox.com


Similar pages

Browse jobs