Python Program for Binary To Decimal Conversion
Binary to Decimal Conversion
In this article, we will discuss binary to decimal conversion in Python. For this purpose, we take a binary integer number from the user, convert it to its decimal equivalent, and print the converted number on the screen.
Working :
A decimal number is calculated by multiplying each digit of the binary number by 2 raised to a power from 0 to n-1, where n is the total number of digits in the binary number and the rightmost digit gets the power 0, and then adding up all the products.
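For example, 1011 in binary is 1*2^3 + 0*2^2 + 1*2^1 + 1*2^0 = 8 + 0 + 2 + 1 = 11 in decimal.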
Methods Discussed :
- Algorithmic way (Binary to Decimal)
- Inbuilt Method (Binary to Decimal)
Method 1
Algorithm :
- While num is greater than zero
- Store the unit's place digit of num in a variable (rem) using num % 10
- Multiply rem by base and add the product to the answer
- Integer-divide num by 10 and multiply base by 2
Time and Space Complexities
- Time Complexity – O(N), where N is the number of digits in the binary number
- Space Complexity – O(1), Constant Space
Python Code :
num = 10
binary_val = num
decimal_val = 0
base = 1
while num > 0:
    rem = num % 10                           # extract the unit's place digit
    decimal_val = decimal_val + rem * base   # add the digit times its place value
    num = num // 10                          # drop the last digit
    base = base * 2                          # next power of 2
print("Binary Number is {}\nDecimal Number is {}".format(binary_val, decimal_val))
Output :
Binary Number is 10
Decimal Number is 2
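Method 2
The inbuilt method from the list above uses Python's built-in int() function, which accepts a base argument; passing base 2 parses a binary string directly. A minimal sketch (the variable names here are illustrative):
Python Code :
binary_str = "1010"                  # binary number taken as a string
decimal_val = int(binary_str, 2)     # int() with base 2 does the conversion
print("Binary Number is {}\nDecimal Number is {}".format(binary_str, decimal_val))
Output :
Binary Number is 1010
Decimal Number is 10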
# We can also convert any binary number to decimal with this code
def BtoD(n):
    Sum = 0
    for i in range(len(n)):
        if n[i] == "0" or n[i] == "1":
            # add the digit times its place value: 2 ** (position from the right)
            Sum = Sum + int(n[i]) * (2 ** (len(n) - (i + 1)))
        else:
            # re-prompt and restart if a non-binary character is found
            n = input("Enter valid binary number: ")
            return BtoD(n)
    print(Sum)

n = input("enter your binary number to convert in decimal number: ")
BtoD(n)
def bintodec(num):
    n = len(str(num))
    total = 0
    for i in str(num):
        # the current digit has place value 2 ** (n - 1)
        total = total + int(i) * 2 ** (n - 1)
        n -= 1
    return total

num = int(input("enter the binary number: "))
res = bintodec(num)
print(res)
I think the above program by @SHUBHANSHU ARYA is wrong because it even accepts inputs containing digits like 2, 3, 4, and so on.
So my code is:
def bintodec(n):
    total = 0
    x = 1                      # place value of the current bit
    while n > 0:
        r = n % 10             # extract the last digit
        if r == 0:
            # a 0 bit contributes nothing; just move to the next place
            x = x * 2
            n = n // 10
        elif r == 1:
            total = total + x  # a 1 bit contributes its place value
            x = x * 2
            n = n // 10
        else:
            print("invalid input")
            break
    return total

n = int(input("enter number: "))
print("The decimal equivalent of binary {} is {}".format(n, bintodec(n)))
# binary to decimal
N = int(input())
string = str(N)
arr = []
for i in string:
    arr.append(int(i))             # collect the digits as integers
l = len(arr)
s = 0
j = l - 1                          # j walks from the last digit to the first
for i in range(0, l, 1):
    s = s + (2 ** i) * arr[j]      # digit arr[j] has place value 2 ** i
    j = j - 1
print(s)
num1 = int(input("Enter the binary number "))
total = 0
temp = num1
base = 1
while temp:
    digit = temp % 10              # extract the last digit
    total += digit * base          # add the digit times its place value
    temp //= 10
    base *= 2
print("{} is the decimal number of {}".format(total, num1))
# read a 4-bit binary number one digit at a time
num1 = int(input("Enter a value 0 or 1: "))
num2 = int(input("Enter a value 0 or 1: "))
num3 = int(input("Enter a value 0 or 1: "))
num4 = int(input("Enter a value 0 or 1: "))
decimal = (2 ** 0) * num4 + (2 ** 1) * num3 + (2 ** 2) * num2 + (2 ** 3) * num1
print("the decimal no. for the given binary is {}".format(decimal))
a = input("enter the value: ")
c = 0
for x in range(len(a)):
    # the digit x places from the right has place value 2 ** x
    b = int(a[-1 - x]) * pow(2, x)
    c = c + b
print(c)
num = input("Enter The Binary Number: ")
x = num[::-1]                      # reverse so index i matches place value 2 ** i
total = 0
for i in range(0, len(x)):
    a = int(x[i]) * 2 ** i
    total = total + a
print("The Decimal No is =", total)
a = input("Enter binary number: ")
k = len(a)
decimal = 0
for i in range(0, k):
    # add a place value only where the bit is 1
    if int(a[(k - 1) - i]) == 1:
        decimal = decimal + pow(2, i)
print("decimal: ", decimal)
raw = input()
l = len(raw)
dec = 0
for i in range(l):
    dec += int(raw[i]) * (2 ** (l - i - 1))   # leftmost digit gets the highest power
print(dec)